I'm trying to get used to the Electronegativity tool by looking around inside open-source Electron projects, just to get familiar with it.
So I'm running it on a nondescript open-source project (this one, if you would like to know: https://github.com/hello-efficiency-inc/raven-reader):
CHECK:     CONTEXT_ISOLATION_JS_CHECK (HIGH | FIRM)
FILE:      /home/ask/Git/raven-reader/src/main/pocket.js (11:23)
ISSUE:     Review the use of the contextIsolation option
REFERENCE: https://git.io/Jeu1p
This looks interesting: it tells me there is an issue with context isolation in the pocket.js file at line 11.
This would, in my mind, indicate that I should find something like this:
contextIsolation: true
However, when I look at the code at the location Electronegativity points to, I just find this:
const authWindow = new BrowserWindow({
  width: 1024,
  height: 720,
  show: true
})
And I can't find any mention of context isolation anywhere in the file. Does this mean that Electronegativity is flagging something that isn't there? Or is there something smelly about how the BrowserWindow is created?
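For reference, if the option were set explicitly it would live inside webPreferences rather than at the top level of the constructor options; a minimal sketch (not code from raven-reader) of what I expected to find:
const authWindow = new BrowserWindow({
  width: 1024,
  height: 720,
  show: true,
  webPreferences: {
    contextIsolation: true  // the option the check asks about, set explicitly
  }
})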
When parsing a path, Node.js considers the root to be part of the directory:
/home/user/dir/file.txt

┌─────────────────────┬────────────┐
│          dir        │    base    │
├──────┬              ├──────┬─────┤
│ root │              │ name │ ext │
"  /    home/user/dir / file  .txt "
└──────┴──────────────┴──────┴─────┘

C:\\path\\dir\\file.txt

┌─────────────────────┬────────────┐
│          dir        │    base    │
├──────┬              ├──────┬─────┤
│ root │              │ name │ ext │
" C:\      path\dir   \ file  .txt "
└──────┴──────────────┴──────┴─────┘
Is it actually so? I am developing a new library, and I am wondering whether I must consider the root to be part of the directory or not.
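To make the diagram concrete, this is what Node's path.parse returns for the POSIX example (output shown as a comment):
const path = require('path');

console.log(path.parse('/home/user/dir/file.txt'));
// { root: '/', dir: '/home/user/dir', base: 'file.txt', ext: '.txt', name: 'file' }
// Note that dir ('/home/user/dir') starts with root ('/'), so the root is included in dir.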
Well, actually "directory" is a fuzzy term. According to the definition:
In computing, a directory is a file system cataloging structure which
contains references to other computer files, and possibly other
directories.
Wikipedia
Nothing there answers my question. What else do we know?
When we use the cd ("change directory") command, we specify a path relative to the current location. Nothing related to the root.
The cd command works within a specific drive (at least on Windows). This indirectly suggests that the root and the directory can be combined, but are initially separate.
More exact terms are the "absolute path of the directory" and the "relative path of the directory". But what is the "directory" itself?
In the Windows case, the drive letter can differ between computers, but that does not affect the file structure inside the drive. Again, the root and the directory are separate in this case.
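The Windows layout can be checked the same way with path.win32.parse (output shown as a comment):
const path = require('path');

console.log(path.win32.parse('C:\\path\\dir\\file.txt'));
// { root: 'C:\\', dir: 'C:\\path\\dir', base: 'file.txt', ext: '.txt', name: 'file' }
// The drive letter is the root, and dir again includes it.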
I'm using PySpark, but I guess this is valid for Scala as well.
My data is stored on S3 in the following structure:
main_folder
└── year=2022
    └── month=03
        ├── day=01
        │   ├── valid=false
        │   │   └── example1.parquet
        │   └── valid=true
        │       └── example2.parquet
        └── day=02
            ├── valid=false
            │   └── example3.parquet
            └── valid=true
                └── example4.parquet
(For simplicity there is only one file in each folder and only two days; in reality there can be thousands of files and many days/months/years.)
The files under the valid=true and valid=false partitions have completely different schemas, and I only want to read the files in the valid=true partition.
I tried using the glob filter, but it fails with AnalysisException: Unable to infer schema for Parquet. It must be specified manually., which is a symptom of having no data (no files matched):
spark.read.parquet('s3://main_folder', pathGlobFilter='*valid=true*')
I noticed that something like this works
spark.read.parquet('s3://main_folder', pathGlobFilter='*example4*')
However, as soon as I try to use a slash or match anything above the bottom level, it fails:
spark.read.parquet('s3://main_folder', pathGlobFilter='*/example4*')
spark.read.parquet('s3://main_folder', pathGlobFilter='*valid=true*example4*')
I did try replacing the * with ** in all locations, but it didn't work.
pathGlobFilter seems to work only on the final file name; for subdirectories you can try the wildcard path below, though it may break partition discovery. To keep partition discovery, add the basePath property as a load option:
spark.read.format("parquet")\
    .option("basePath", "s3://main_folder")\
    .load("s3://main_folder/*/*/*/valid=true/*")
However, I am not sure whether you can combine wildcard paths with pathGlobFilter if you want to match on both subdirectories and end filenames.
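If you do need both at once, an untested sketch (assuming pathGlobFilter can simply be set as another option alongside the wildcard path) would let the wildcards select the valid=true directories and pathGlobFilter filter the file names inside them:
# Untested sketch: wildcards pick the valid=true partitions,
# pathGlobFilter additionally filters on the file name.
df = (spark.read.format("parquet")
      .option("basePath", "s3://main_folder")
      .option("pathGlobFilter", "*.parquet")
      .load("s3://main_folder/*/*/*/valid=true/*"))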
Reference:
https://simplernerd.com/java-spark-read-multiple-files-with-glob/
https://spark.apache.org/docs/latest/sql-data-sources-parquet.html
Consider the following project directory:
root/
├── project_one/
│   ├── .cargo/
│   │   └── config.toml
│   ├── Cargo.toml
│   └── target.json
└── Cargo.toml
Cargo.toml is the workspace manifest with members = [ "project_one" ]
project_one/Cargo.toml is the project manifest
project_one/target.json defines target properties used by rustc(?) (a custom target specification, in my case for the avr-atmega328p)
root/project_one/.cargo/config.toml lists target = "target.json" under [build]
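For reference, the config file from that last bullet is simply (a sketch of exactly what is described above):
# project_one/.cargo/config.toml
[build]
target = "target.json"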
Problem
project_one does not compile using the configured target unless I delete root/Cargo.toml.
The build fails with an error message indicating that information for a correct target platform is missing (error: language item required, but not found: 'eh_personality').
Potential Solution
At the time of writing, a very recent PR has been merged into the rust-lang/cargo master branch: https://github.com/rust-lang/cargo/pull/9030
Are there any other solutions to avoid having to wait for a fix?
I'm writing a Terraform script to create an EKS cluster with its worker nodes on AWS. It's my first time doing it, so I'm a bit confused.
Here is the folder organisation:
├─── Int AWS Account
│    ├─── variables.tf
│    ├─── eks-cluster.tf (refers to the modules)
│    └─── others
│
├─── Prod AWS Account
│    └─── (will be the same as Int, with different settings in variables)
│
├─── ReadMe.md
│
├─── data sources
│
└─── Modules
     ├─── cluster.tf
     ├─── worker-nodes.tf
     └─── worker-nodes-sg.tf
I am a bit confused about how to use and pass variables. Right now, I refer to ${var.name} in the module folder; in eks-cluster.tf I either put a direct value, name = blabla (which I mostly avoid), or refer to the variable again and have a variables file in the account folder.
Is that correct?
I'm not sure if I understand your question correctly, but in general you want your module files to use variables only, as modules are intended to be generic so you can easily include them in different environments.
When including the module in eks_cluster_int.tf or eks_cluster_prod.tf, you then pass values for all the variables defined in the module itself. This way you can use environment-specific values with the same module.
module "cluster" {
source = "..."
var1 = value1 # directly passing value
var2 = ${var.int_specific_var} # can be defined in variables.tf of environment
...
}
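For completeness, the module side would then only declare those inputs in its own variables.tf, roughly like this (the names are placeholders matching the example above):
variable "var1" {
  description = "Example input consumed inside the module"
  type        = string
}

variable "var2" {
  description = "Environment-specific input passed in by the calling environment"
  type        = string
}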
Does this answer your question?
Is there a way to programmatically write to the bar below Vim windows? I'm referring to the bar that displays the filename, the cursor row and column, and the percentage of the document above the bottom of the window.
It is called the status line.
You can get more information by typing :help statusline.
This is the one I use, which includes the line and column at the bottom right.
set statusline=%f%m%r%h\ [%L]\ [%{&ff}]\ %y%=[%p%%]\ [line:%05l,col:%02v]
%f         relative path to the file
%m         modified flag: [+] if modified, [-] if not modifiable
%r         read-only flag
%h         help flag
[%L]       total number of lines in the file
[%{&ff}]   file format (dos/unix)
%y         file type
[%p%%]     percentage through the file
%05l       line number
%02v       column number
The options, all of which start with the % sign, are listed from left to right as you go down. They are all described in the help.
This is a fairly static configuration; if you are willing to use a Vim plugin, there are some, such as vim-airline, that provide more advanced features like git integration.
The information in that bar is set via the statusline option. You can set it from within a script with let &statusline = ..., just as you would any other Vim option.
See :help statusline for more information.
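For example, a minimal sketch of doing it from a script (the format string is just an illustration):
" always show the status line, then build it programmatically
set laststatus=2
let &statusline = '%f %m%r%h%w%=%l/%L,%v %p%%'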
This is my take on this problem.
I set laststatus so that the "dedicated" status line is turned off and the command-line area is used for the status (via rulerformat instead of statusline) when there is only one window.
set laststatus=1
set statusline=%F\ %(%w%h%r%m%)%=%2v:%4l/%4L\ 0x%02B
set rulerformat=%25(%w%h%r%<%m%=%2v:%4l/%4L\ 0x%02B%)
The status line displays the filename (with path), then a space, then optional indicators for [Preview], [help], [RO], and [+] (if the file is modified), depending on the file status. The single-window version leaves out the filename. In both, I then pad with spaces so the rest is right-justified, show two characters for the cursor column, then the line number and the total number of lines, then the hex code of the character under the cursor.
There are a bunch of interesting examples in the help file; as others have said, check out :help statusline.