How can I correctly pull in objects that are built under separate SConscripts when linking my program? I have a directory structure that looks similar to this:
SConstruct
|
|---- moduleA
|     |
|     |---- SConscript
|     |---- src
|     |     |-- sourceA1.cpp
|     |     |-- sourceA2.cpp
|     |---- build
|           |-- sourceA1.o
|           |-- sourceA2.o
|
|---- moduleB
|     |
|     |---- SConscript
|     |---- src
|     |     |-- sourceB1.cpp
|     |     |-- sourceB2.cpp
|     |---- build
|           |-- sourceB1.o
|           |-- sourceB2.o
|
|---- Program
      |
      |---- SConscript
      |---- src
            |-- main.cpp
moduleA's SConscript might look like this (moduleB is similar):
moduleEnv = env.Clone()
srcFiles = [ 'src/sourceA1.cpp', 'src/sourceA2.cpp' ]
obj = moduleEnv.Object(srcFiles)
In my Program, I need to use the object files from the modules to build. I don't want to use a bunch of hard-coded paths to do this. Is there any way that the SConscript for the program can reference the objects of the modules?
Program's SConscript
progEnv = env.Clone()
objectList = [ objects from the modules ] # Is this possible, or is there a better way?
prog = progEnv.Program('myprogram', [ objectList, 'src/main.cpp' ])
Any variable you define in an SConscript can be Export()ed so other SConscripts can pick it up (via Import()); the export space is global. In your example for moduleA's SConscript, after the call to Object, obj holds a list of Nodes (even though in the majority of cases that list has only one entry; here it would be two). A Node is SCons' internal representation of an object participating in the build, and it's perfectly fine to use Nodes to refer to built objects. It also means you shouldn't have path problems, because SCons always knows how to locate a Node. You can collect those and export them. I would probably do one per module, so perhaps:
obj = moduleEnv.Object(srcFiles)
moduleEnv.Export(moduleAobj=obj)
Then your top-level SConstruct can look like:
SConscript("moduleA/SConscript")
Import("moduleAobj")
SConscript("moduleB/SConscript")
Import("moduleBobj")
objects = Flatten([moduleAobj, moduleBobj])
That's untested, but hopefully a starting point at least.
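To then hand those object nodes down to the Program build, one option is the exports argument of SConscript(); a sketch in the same untested spirit, assuming env is already exported from the SConstruct:
SConscript("Program/SConscript", exports={'env': env, 'objects': objects})
and Program/SConscript becomes:
# Pull in the environment and the collected module object nodes
Import("env", "objects")
progEnv = env.Clone()
prog = progEnv.Program('myprogram', objects + ['src/main.cpp'])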
Context
After applying an 80-character line-length limit in the markdownlint pre-commit check, I had difficulties including a markdown table that is wider than 80 characters.
Note
I see value in applying the linter to the README.md because I quite often forget about the line length while typing it. (In essence, the trivial solution of disabling the linter or disabling MD013 everywhere is considered sub-optimal.)
Pre-commit of MarkdownLint
- repo: https://github.com/markdownlint/markdownlint
  rev: v0.11.0
  hooks:
    - id: markdownlint
Markdown table example
| Algorithm | Encoding | Adaptation | Radiation | Backend |
| ------------------------------------ | -------- | ---------- | ------------ | ---------------------------- |
| Minimum Dominating Set Approximation | Sparse | Redundancy | Neuron Death | - networkx LIF<br>- Lava LIF |
| Some Algorithm Approximation | Sparse | Redundancy | Neuron Death | - networkx LIF<br>- Lava LIF |
| | | | | |
Approach I
First I tried to include an ignore-MD013 (line-length check) directive around the relevant section containing the Markdown table; however, markdownlint does not support such an option.
Approach II
I tried to manually apply new line breaks within the table; however, that results in additional rows in the table.
Question
How can I stay within the 80-character limit whilst including a wide markdown table (without generating new table rows)?
You could try changing your hook to another similar project: igorshubovych/markdownlint-cli
repos:
  - repo: https://github.com/igorshubovych/markdownlint-cli
    rev: v0.32.2
    hooks:
      - id: markdownlint
        args: ["--fix"]
You may include a .markdownlint.yaml file in the same directory as your .pre-commit-config.yaml. Set the line length rule but ignore it for tables. Like so:
# Default state for all rules
default: true

# MD013/line-length - Line length
MD013:
  line_length: 80
  tables: false
Check the .markdownlint.yaml schema for other configuration options.
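Alternatively, the Node-based markdownlint that markdownlint-cli wraps supports inline configuration comments, so you could exempt only the one wide table rather than all tables; a sketch around your example table:
<!-- markdownlint-disable MD013 -->
| Algorithm | Encoding | Adaptation | Radiation | Backend |
| ... (wide table rows) ... |
<!-- markdownlint-enable MD013 -->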
As you can see in the structure below, I created a TEST package with two sub-packages (test1, test2). This is the structure of the package:
\test
| __pycache__ # created when I import lib from main
| __init__.py
| lib.py
| \test1
| | __init__.py
| | test1.py
| \test2
| | __init__.py
| | test2.py
I wanted to import lib inside test1.py
I have tried the following code (as I saw in tutorials):
import lib
lib.test()
# from lib import test
# test()
# from ..lib import test
# test()
I tried to execute the code from the VS Code terminal, first from the TEST package directory and then from inside the test1 sub-package (cd .\test1), but it didn't work.
I got errors both times.
Thanks for the help!!
I have an XML file with a structure like this one:
<cat>
<foo>
<fooID>1</fooID>
<fooName>One</fooName>
<bar>
<barID>a</barID>
<barName>small_a</barName>
<barClass>
<baz>
<qux>
<corge>
<corgeName>...</corgeName>
<corgeType>
<corgeReport>
<corgeReportRes Reference="x" Channel="High">
<Pos>1</Pos>
</corgeReportRes>
</corgeReport>
</corgeType>
</corge>
</qux>
</baz>
</barClass>
</bar>
<bar>
<barID>b</barID>
<barName>small_b</barName>
<barClass>
<baz>
<qux>
<corge>
<corgeName>...</corgeName>
<corgeType>
<corgeReport>
<corgeReportRes Reference="y" Channel="High">
<Pos>1</Pos>
</corgeReportRes>
</corgeReport>
</corgeType>
</corge>
</qux>
</baz>
</barClass>
</bar>
</foo>
<foo>
<fooID>2</fooID>
<fooName>Two</fooName>
<bar>
<barID>c</barID>
<barName>small_c</barName>
<barClass>
<baz>
<qux>
<corge>
<corgeName>...</corgeName>
<corgeType>
<corgeReport>
<corgeReportRes Reference="z" Channel="High">
<Pos>1</Pos>
</corgeReportRes>
</corgeReport>
</corgeType>
</corge>
</qux>
</baz>
</barClass>
</bar>
</foo>
</cat>
And I would like to obtain the values of specific parent/grandparent/great-grandparent tags of nodes that have the attribute Channel="High": only the fooID, fooName, barID, and barName values.
I have the following code in Python 3:
import xml.etree.ElementTree as xmlET
root = xmlET.parse('file.xml').getroot()
test = root.findall(".//*[@Channel='High']")
This actually gives me a list of matching elements; however, I still need the information from their specific parents/grandparents/great-grandparents.
How could I do that?
fooID | fooName | barID | barName
- - - - - - - - - - - - - - - - -
1 | One | a | small_a <-- This is the information I'm interested in
1 | One | b | small_b <-- Also this
2 | Two | c | small_c <-- And this
Edit: The fooID and fooName nodes are siblings of bar, the distant ancestor of the node carrying Channel="High". It's almost the same for barID and barName: they are siblings of barClass, the ancestor containing the Channel="High" node. Also, what I want to obtain is the values 1, One, a and small_a, not to filter by them, since there will be multiple foo blocks.
If I understand you correctly, you are probably looking for something like this (using Python):
from lxml import etree

foos = """[your xml above]"""
doc = etree.fromstring(foos)

print('fooID | fooName | barID | barName')
for entry in doc.xpath('//foo[.//corgeReportRes[@Channel="High"]]'):
    foo_id = entry.xpath('./fooID/text()')[0]
    foo_name = entry.xpath('./fooName/text()')[0]
    for bar in entry.xpath('./bar[.//corgeReportRes[@Channel="High"]]'):
        bar_id = bar.xpath('./barID/text()')[0]
        bar_name = bar.xpath('./barName/text()')[0]
        print(' | '.join([foo_id, foo_name, bar_id, bar_name]))
Output:
fooID | fooName | barID | barName
1 | One | a | small_a
1 | One | b | small_b
2 | Two | c | small_c
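If you would rather stay with the standard library's xml.etree.ElementTree from your question, the same result can be had by iterating foo and bar directly (ElementTree has no parent pointers); a sketch, untested against your real file:
import xml.etree.ElementTree as xmlET

root = xmlET.parse('file.xml').getroot()
print('fooID | fooName | barID | barName')
for foo in root.iter('foo'):
    for bar in foo.iter('bar'):
        # only report bars that contain a Channel="High" result
        if bar.find(".//corgeReportRes[@Channel='High']") is not None:
            print(' | '.join([foo.findtext('fooID'), foo.findtext('fooName'),
                              bar.findtext('barID'), bar.findtext('barName')]))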
I'm attempting to get a working prototype of the following scenario:
Language: Rust (rustc 1.45.0-nightly (ad4bc3323 2020-06-01))
Framework: Rocket v0.4.4
Build Tool: Bazel
Platform: Mac OS X / Darwin x64
Running bazel build //web-api yields the error below. Based on the Cargo.lock file, I believe it is because Rocket's dependency on the hyper library specifies a dependency on log 0.3.9; for whatever reason it is not using the more recent log 0.4.x. That said, I don't know why it is pulling in this library at all, since if I build manually it works fine.
ERROR: /private/var/tmp/_bazel_nathanielford/2a39169ea9f6eb02fe788b12f9eae88f/external/raze__log__0_3_9/BUILD.bazel:27:1: error executing shell command: '/bin/bash -c CARGO_MANIFEST_DIR=$(pwd)/external/raze__log__0_3_9 external/rust_darwin_x86_64/bin/rustc "$@" --remap-path-prefix="$(pwd)"=__bazel_redacted_pwd external/raze__log__0_3_9/src/lib.rs -...' failed (Exit 1) bash failed: error executing command /bin/bash -c 'CARGO_MANIFEST_DIR=$(pwd)/external/raze__log__0_3_9 external/rust_darwin_x86_64/bin/rustc "$@" --remap-path-prefix="$(pwd)"=__bazel_redacted_pwd' '' external/raze__log__0_3_9/src/lib.rs ... (remaining 24 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
error[E0425]: cannot find function `set_logger` in crate `log`
--> external/raze__log__0_3_9/src/lib.rs:731:16
|
731 | match log::set_logger(&ADAPTOR) {
| ^^^^^^^^^^ not found in `log`
|
help: consider importing this function
|
204 | use set_logger;
|
The following is my directory structure:
/
|-WORKSPACE
|-BUILD # Empty
|-web-api/
| |-BUILD
| |-src/
| | |-main.rs
| |-cargo/
| |-Cargo.toml
| |-Cargo.lock
| |-BUILD.bazel
| |-remote/
| |-... (Cargo-raze files)
In order to set up cargo-raze I did the following, following the instructions from the GitHub page:
$ cd web-api/cargo
$ cargo generate-lockfile
$ cargo vendor --versioned-dirs --locked
$ cargo raze
(The generate-lockfile is what creates the Cargo.lock file, and cargo raze is what creates the BUILD.bazel file and all the contents of the remote subdirectory.)
And then to execute the bazel build I go back to the root and run bazel build //web-api, which produces the above error.
This is my WORKSPACE file:
workspace(name = "rocket-bazel")
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "io_bazel_rules_rust",
sha256 = "f21c67fc2fef9d57fa3c81fde1defd9e57d451883388c0a469ec1c470fd30dcb",
strip_prefix = "rules_rust-master",
urls = [
"https://github.com/bazelbuild/rules_rust/archive/master.tar.gz"
],
)
http_archive(
name = "bazel_skylib",
sha256 = "9a737999532daca978a158f94e77e9af6a6a169709c0cee274f0a4c3359519bd",
strip_prefix = "bazel-skylib-1.0.0",
url = "https://github.com/bazelbuild/bazel-skylib/archive/1.0.0.tar.gz",
)
load("#io_bazel_rules_rust//rust:repositories.bzl", "rust_repositories")
rust_repositories(version="nightly", iso_date="2020-06-02")
load("#io_bazel_rules_rust//:workspace.bzl", "bazel_version")
bazel_version(name = "bazel_version")
load("//web-api/cargo:crates.bzl", "raze_fetch_remote_crates")
raze_fetch_remote_crates()
This is my web-api/BUILD file:
load("#io_bazel_rules_rust//rust:rust.bzl", "rust_binary")
rust_binary(
name = "web-api",
srcs = ["src/main.rs"],
deps = [
"//web-api/cargo:rocket",
],
)
I've run out of ideas as to what to try. I can get this to compile without Bazel, just using Rust's own tooling (though obviously the files are in slightly different places), and I can get it to compile inside a Docker container. I just can't get Bazel (necessarily with cargo-raze, in either vendored or remote mode) to run successfully. I assume there is some mismatch in the compile target or the nightly build that is not being set properly, but I'm not sure how to diagnose or get past that.
Here is a link to a repository with the files/structure I tried.
I had a similar issue when I made a minimal Bazel workspace with Rust and the log crate together with the env_logger crate. I saw a similar error when compiling without features = ["std"], and enabling that in Cargo.toml on the log dependency did not help.
My solution was that in Cargo.toml under [raze] I added:
default_gen_buildrs = true
I could trace it down to this: when the default_gen_buildrs flag is not set, the generated BUILD.bazel file for the log crate has neither a cargo_build_script definition nor this:
crate_features = [
"std",
],
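For reference, the [raze] section of my Cargo.toml ended up looking roughly like this; the workspace_path and genmode values here are assumptions matching the layout in the question, not copied from a verified build:
[raze]
workspace_path = "//web-api/cargo"
genmode = "Remote"
default_gen_buildrs = true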
I currently use spaCy to traverse the dependency tree, and generate entities.
def extract_entities(unicode_text):
    # detect_lang and get_spacy_model are pre-defined helpers
    nlp = get_spacy_model(detect_lang(unicode_text))
    doc = nlp(unicode_text)

    entities = set()
    for sentence in doc.sents:
        # traverse tree picking up entities
        for token in sentence.subtree:
            ...  # pick entities using some pre-defined rules
    entities.discard('')
    return entities
Are there any good Java alternatives to spaCy?
I am looking for libraries which generate the dependency tree as spaCy does.
EDIT:
I looked into the Stanford Parser. However, it generated the following constituency parse tree:
ROOT
|
NP
_______________|_________
| NP
| _________|___
| | PP
| | ________|___
NP NP | NP
____|__________ | | _______|____
DT JJ JJ NN NNS IN DT JJ NN
| | | | | | | | |
the quick brown fox jumps over the lazy dog
However, I am looking for a dependency tree structure like spaCy produces:
jumps_VBZ
__________________________|___________________
| | | | | over_IN
| | | | | |
| | | | | dog_NN
| | | | | _______|_______
The_DT quick_JJ brown_JJ fox_NN ._. the_DT lazy_JJ
You're looking for the Stanford Dependency Parser. Like most of the Stanford tools, this is also bundled with Stanford CoreNLP, under the depparse annotator. Other dependency parsers include the Malt parser (a feature-based shift-reduce parser) and Ryan McDonald's MST parser (an accurate but slower maximum-spanning-tree parser).
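For instance, a minimal sketch of pulling the dependency graph out of CoreNLP via the depparse annotator (class names from recent CoreNLP releases; untested here):
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations;
import edu.stanford.nlp.util.CoreMap;
import java.util.Properties;

public class DepParseDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // depparse requires tokenize, ssplit and pos to run first
        props.setProperty("annotators", "tokenize,ssplit,pos,depparse");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation doc = new Annotation("The quick brown fox jumps over the lazy dog.");
        pipeline.annotate(doc);

        for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
            // the dependency tree, analogous to the spaCy structure above
            SemanticGraph deps = sentence.get(
                    SemanticGraphCoreAnnotations.BasicDependenciesAnnotation.class);
            System.out.println(deps);
        }
    }
}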
Another way to integrate with Java and other languages is to use a spaCy REST API. For example, https://github.com/jgontrum/spacy-api-docker provides a Dockerization of a spaCy REST API.
I recently released spaCy4j, which mimics spaCy's Token container objects and integrates with spacy-server or CoreNLP.
Once you have a running docker of spacy-server (very easy to set up), it's as easy as:
// Create a new spacy-server adapter with host and port matching a running instance of spacy-server.
SpaCyAdapter adapter = SpaCyServerAdapter.create("localhost", 8080);
// Create a new SpaCy object. It is thread safe and should be reused across our app
SpaCy spacy = SpaCy.create(adapter);
// Parse a doc
Doc doc = spacy.nlp("My head feels like a frisbee, twice its normal size.");
// Inspect tokens
for (Token token : doc.tokens()) {
System.out.printf("Token: %s, Tag: %s, Pos: %s, Dependency: %s%n",
token.text(), token.tag(), token.pos(), token.dependency());
}
Feel free to contact via github for any questions etc.
spaCy can be run from a Java program.
The environment should be created first from a command prompt by executing the following commands:
python3 -m venv env
source ./env/bin/activate
pip install -U spacy
python -m spacy download en
python -m spacy download de
Create a bash file spacyt.sh with the following commands, parallel to the env folder:
#!/bin/bash
python3 -m venv env
source ./env/bin/activate
python test1.py
Place the spaCy code in a Python script, test1.py:
import spacy

print('This is a test script of spacy')
nlp = spacy.load("en_core_web_sm")
doc = nlp(u"This is a sentence")
print([(w.text, w.pos_) for w in doc])
# instead of print, we can write to a file for further processing
In the Java program, run the bash file:
String cmd = "./spacyt.sh";
try {
    // launch the script that activates the venv and runs test1.py
    Process p = Runtime.getRuntime().exec(cmd);
    p.waitFor();
    System.out.println("cmdT executed!");
} catch (Exception e) {
    e.printStackTrace();
}