where is the root directory of setResource in chisel3? - riscv

I'm trying to use BlackBox inside the rocket-chip source code using chisel3.
Earlier, I tried it using the chisel3 template, and it works well when I put the resource at src/main/resources/alu/custom_ALU.v (setResource("/alu/custom_ALU.v")).
However, when I try the same thing inside the rocket-chip repo, it throws a FileNotFoundException.
Where is the root directory of setResource in the rocket-chip repository?

It appears that setResource is relative to the resource directory as defined by sbt. You are correct that the default for this is src/main/resources when your code is in src/main/scala. The problem here, I believe, is that rocket-chip invokes firrtl as a separate process instead of as a single multi-project run, so it is probably looking in firrtl's resource directory, rocket-chip/firrtl/src/main/resources. This is obviously not very helpful, so I think this should be changed. Would you mind filing an issue on the FIRRTL repo?
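For comparison, here is a quick sketch of the layout that sbt's default resource lookup assumes (the same one that worked in the chisel3 template; the paths are examples, not taken from rocket-chip):

```shell
# setResource("/alu/custom_ALU.v") resolves against src/main/resources
mkdir -p src/main/resources/alu
touch src/main/resources/alu/custom_ALU.v   # the BlackBox Verilog source goes here
ls src/main/resources/alu/custom_ALU.v
```

If the file is anywhere else relative to the sbt project actually running the elaboration, the lookup fails with a FileNotFoundException.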

Related

Test behaves differently in workspace than as standalone project

Why would a test pass when included in a workspace, but fail when I move the entire directory out of the workspace? Everything builds correctly in both contexts, but the code behaves differently in the two cases, and so the test fails in the standalone project.
The workspace in question is this one: https://github.com/amethyst/rustrogueliketutorial
The project in the workspace is chapter-07-damage.
I can include the test, if necessary, but I'm mostly trying to understand what extra context is used in the workspace that causes the code to behave so differently.
Thanks, all!

Synchronise 2 repositories in git

I have my main project hosted in GitHub. Everything is good and works.
Now I'm trying to create a Solaris port. I made myself an OpenSolaris VM, installed Solaris Studio as the compiler/IDE, and built the project.
Everything works fine.
Now what I'm thinking is that since Solaris Studio is a completely different IDE/compiler from MSVC/Anjuta/Xcode, I should create a different repository (NOT A FORK) and push the Solaris stuff there.
The only problem is - code synchronization.
If I make a change in my main repository and push it to the remote, I want my second repository to be updated as well with the changes to the *.cpp/*.h files.
Does some kind of hook exist to do that?
Or maybe I'm better off with creating a fork? But then changes to the build system will be overwritten.
Please advise.
This is the current structure for the main project:
Project folder
├── main app folder (*.cpp, *.h, *.sln, Makefile.am/Makefile.in, xcodeproj folder)
└── library 1 folder (*.cpp, *.h, *.sln, Makefile.am/Makefile.in, xcodeproj folder)
I wouldn't even bother with a fork.
I would just make sure the build system is isolated in its own folder, which allows you to have two build-configuration folders in one repository:
one for a default environment
one dedicated to a Solaris environment
That way, you can go on contributing to the code, from either one of those environments, without having to deal with any synchronization issue.
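A sketch of that single-repository layout (the folder names here are just examples):

```shell
# Shared *.cpp/*.h sources live at the top level; each build system is isolated
# in its own folder, so no cross-repository synchronization is needed
mkdir -p src build-default build-solaris
# build-default/  -> *.sln, Makefile.am/Makefile.in, xcodeproj folder
# build-solaris/  -> Solaris Studio project files
# Both configurations point at the same src/, so a change to the sources is
# immediately visible to both environments
ls -d src build-default build-solaris
```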

Is it okay to use a single shared directory as Cargo's target directory for all projects?

Cargo has the --target-dir flag, which specifies a directory to store temporary or cached build artifacts. You can also set it user-wide in the ~/.cargo/config file. I'd like to set it to a single shared directory to make maintenance easier.
I saw that some artifact directories in the target-dir are suffixed with (apparently unique) hashes, which looks safe, but the final products are not suffixed with hashes, which doesn't seem safe against name clashes. I'm not sure about this, as I am not an expert on Cargo.
I tried setting ~/.cargo/config to
[build]
target-dir = "./.build"
My original intention was to use the project's local ./.build directory, but somehow Cargo placed all build files into the ~/.build directory. I got curious what would happen if I put all build files from every project into a single shared build directory.
It has worked well with several different projects so far, but working for a few samples doesn't mean it is designed or guaranteed to work in every case.
In my case, I am using a single shared build directory for all projects of all workspaces of a user: not only the projects in one workspace, but literally every project in every workspace. As far as I know, Cargo is designed to work with a local target directory. If it is designed to work only with a local directory, a shared build directory is likely to cause some issues.
Rust/Cargo 1.38.0.
Yes, this is intended to be safe.
I agree with the comments that there are probably better methods of achieving your goal. Workspaces are a simple solution for a small group of crates, and sccache is a more principled caching mechanism.
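If you do go with a single shared directory, using an absolute path in ~/.cargo/config avoids the relative-path surprise described in the question, where "./.build" resolved against the shell's working directory rather than the project root (the path below is only an example):

```toml
[build]
# An absolute path resolves the same way from every project directory
target-dir = "/home/me/cargo-target"
```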
See also:
Fix running Cargo concurrently (PR #2486)
Allow specifying a custom output directory (PR #1657)
Can I prevent cargo from rebuilding libraries with every new project?

How to update repository with built project?

I’m trying to set up GitLab CI/CD for an old client-side project that makes use of Grunt (https://github.com/yeoman/generator-angular).
Up to now the deployment worked like this:
run '$ grunt build' locally, which built the project and created files in a 'dist' folder in the root of the project
commit changes
changes pulled onto production server
After creating the .gitlab-ci.yml and making a commit, the GitLab CI/CD job passes, but the files in the 'dist' folder in the repository are not updated. If I define an artifact, I get the changed files in the download. However, I would prefer the files in the 'dist' folder in the repository to be updated, so we can carry on with the same workflow, which suits us. Is this achievable?
I don't think committing into your repo inside a pipeline is a good idea. Version control wouldn't be as clear, and some people have pipelines that trigger automatically when the repo is pushed, which would set off a loop of pipelines.
Instead, you might reorganize your environment to use Docker; there are numerous reasons for using Docker in professional and development environments. To name just a few: it would enable you to save the freshly built project into a registry and reuse it whenever needed, with exactly the version you require and with the desired /dist inside, so that you can easily run it in multiple places, scale it, manage it, etc.
If you changed to Docker you wouldn't actually have to do a thing in order to have the dist persistent, just push the image to the registry after the build is done.
But to actually answer your question:
There is a feature request that has been open for a very long time for the same problem you ask about: here. Currently there is no safe and professional way to do it, as GitLab members state. However, you can push changes back, as one of the GitLab members (Kamil Trzciński) suggested:
git push http://gitlab.com/group/project.git HEAD:my-branch
Just put it in the script section of your gitlab-ci file.
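As a sketch, that could look like the following in .gitlab-ci.yml. The CI_PUSH_TOKEN variable is an assumption (a token you would define yourself in the project's CI/CD settings), and the repository URL and branch name are placeholders:

```yaml
build:
  script:
    - grunt build
    - git add dist
    # "[skip ci]" in the commit message prevents this push from triggering
    # another pipeline; "|| true" keeps the job green when dist is unchanged
    - git commit -m "Update dist [skip ci]" || true
    - git push "https://ci:${CI_PUSH_TOKEN}@gitlab.com/group/project.git" HEAD:my-branch
```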
There are more hacky methods presented there, but be sure to acknowledge the risks that come with them (pipelines become more error-prone and, if configured the wrong way, they might for example publish confidential information or trigger an infinite pipeline loop).
I hope you found this useful.

Update repository on successful compile

I'm just wondering if there is a way (Linux / Unix) to update a Github repository when a particular file has compiled successfully?
So for example, I have a repository called 'Work'; if I compile the file main.cpp and it compiles successfully, the file/repository on GitHub should be synchronised automatically.
I hope this makes sense and someone can help me :)!
Thanks :)
You can do it the other way around. If you write a proper pre-commit hook, the commit will succeed only if main.cpp compiles.
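A minimal sketch of such a pre-commit hook, assuming g++ and a main.cpp at the repository root (the mkdir is only there to make the sketch self-contained; a real repository already has .git/hooks):

```shell
mkdir -p .git/hooks   # already present in a real repository
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Abort the commit unless main.cpp compiles
g++ -fsyntax-only main.cpp || { echo "main.cpp does not compile; commit aborted" >&2; exit 1; }
EOF
chmod +x .git/hooks/pre-commit
```

Git runs this script before each commit; a non-zero exit status cancels the commit.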
I'm just wondering if there is a way (Linux / Unix) to update a Github repository when a particular file has compiled successfully?
If you can get and analyze the results of running gcc (its exit code, or by grepping its output), you can do what you want with a rather easy and small (2-3-line) shell script; I can't see any trouble here.
From my side, your workflow looks less than bullet-proof (if you push only sporadically, you have a good chance of losing a lot of local work in case of disaster); for safety I would prefer to push everything and tag the compilable changesets.
