rspec-puppet testing fails due to unavailable gems - puppet

I'm trying to write rspec-puppet tests for a module.
The module has the following tree:
|-- manifests
|   `-- test_file.pp
|-- Rakefile
`-- spec
    |-- classes
    |-- defines
    |   `-- test_file_spec.rb
    |-- fixtures
    |   |-- manifests
    |   |   `-- site.pp
    |   `-- modules
    |       `-- test
    |           |-- files -> ../../../../files
    |           |-- lib -> ../../../../lib
    |           |-- manifests -> ../../../../manifests
    |           `-- templates -> ../../../../templates
    |-- functions
    |-- hosts
    `-- spec_helper.rb
I am getting the below error when I run "rake rspec"
(in /etc/puppetlabs/puppet/modules/offshore/test)
rake aborted!
no such file to load -- rspec/core/rake_task
/etc/puppetlabs/puppet/modules/offshore/test/Rakefile:2
(See full trace by running task with --trace)
When I run "rake spec --trace" it gives the following:
rake aborted!
no such file to load -- rspec/core/rake_task
/usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
/usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
/etc/puppetlabs/puppet/modules/offshore/test/Rakefile:2
/usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2382:in `load'
/usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2382:in `raw_load_rakefile'
/usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2016:in `load_rakefile'
/usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2067:in `standard_exception_handling'
/usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2015:in `load_rakefile'
/usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1999:in `run'
/usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2067:in `standard_exception_handling'
/usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1997:in `run'
/usr/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31
/usr/bin/rake:19:in `load'
/usr/bin/rake:19
Can someone help me with setting this up?

You need a Gemfile with the following content in the root of your module:
source 'http://rubygems.org'

group :test do
  gem 'rake'
  gem 'puppet', ENV['PUPPET_VERSION'] || '~> 3.4.0'
  gem 'puppet-lint'
  gem 'rspec-puppet', :git => 'https://github.com/rodjek/rspec-puppet.git'
  gem 'puppet-syntax'
  gem 'puppetlabs_spec_helper'
  gem 'simplecov'
  gem 'metadata-json-lint'
end
Then run bundle install
Then run bundle exec rake spec
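Your trace shows the failure comes from line 2 of your Rakefile, which requires rspec/core/rake_task with the system Ruby before any of these gems are installed. If you use puppetlabs_spec_helper (included in the Gemfile above), the Rakefile can stay minimal; a sketch, assuming you let the helper define the spec tasks:

# Rakefile (minimal sketch, relies on the puppetlabs_spec_helper gem above)
require 'puppetlabs_spec_helper/rake_tasks'

# spec/spec_helper.rb (typical companion file, pulls in the rspec-puppet setup)
require 'puppetlabs_spec_helper/module_spec_helper'

Running the task as bundle exec rake spec then resolves the gems from your Gemfile instead of the system gem path, which is what the original "no such file to load" error was about.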
There's also a useful tool, puppet-retrospec, that will automatically add specs to an existing module: https://github.com/logicminds/puppet-retrospec. This might help you.
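If you prefer to write the spec by hand rather than generate it, a minimal spec/defines/test_file_spec.rb could look like the sketch below; the define name test::test_file and the resource title are assumptions based on your fixture layout, so adjust them to your manifest:

require 'spec_helper'

# Assumes manifests/test_file.pp declares a define named test::test_file
describe 'test::test_file', :type => :define do
  let(:title) { 'example' }

  it { should compile.with_all_deps }
end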

Related

GitHub Actions not creating Rust binaries

I am using GitHub Actions to cross-compile my Rust program. The action completes successfully, and files are created in the target directory, but there is no binary. This is my workflow file:
name: Compile and save program

on:
  push:
    branches: [main]
    paths-ignore: ["samples/**", "**.md"]
  workflow_dispatch:

jobs:
  build:
    strategy:
      fail-fast: false
      matrix:
        target:
          - aarch64-unknown-linux-gnu
          - i686-pc-windows-gnu
          - i686-unknown-linux-gnu
          - x86_64-pc-windows-gnu
          - x86_64-unknown-linux-gnu
    name: Build executable
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Set up Rust
        uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
      - name: Install dependencies
        run: |
          rustup target add ${{ matrix.target }}
      - name: Build
        uses: actions-rs/cargo@v1
        with:
          use-cross: true
          command: build
          args: --target ${{ matrix.target }} --release --all-features --target-dir=/tmp
      - name: Debug missing files
        run: |
          echo "target dir:"
          ls -a /tmp/release
          echo "deps:"
          ls -a /tmp/release/deps
      - name: Archive production artifacts
        uses: actions/upload-artifact@v3
        with:
          name: ${{ matrix.target }}
          path: |
            /tmp/release
And this is the layout of the created directory when targeting Windows x86_64 (the only difference when targeting other platforms is the names of the directories within .fingerprint and build):
.
├── .cargo-lock
├── .fingerprint/
│   ├── libc-d2565b572b77baea/
│   ├── winapi-619d3257e8f28792/
│   └── winapi-x86_64-pc-windows-gnu-7e7040207fbb5417/
├── build/
│   ├── libc-d2565b572b77baea/
│   ├── winapi-619d3257e8f28792/
│   └── winapi-x86_64-pc-windows-gnu-7e7040207fbb5417/
├── deps/
│   └── <empty>
├── examples/
│   └── <empty>
└── incremental/
    └── <empty>
As you can see, there is no binary, and this is reflected in the uploaded artifact.
What is causing this?
EDIT 1
The program builds fine on my local device. My .cargo/config.toml is below.
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
And this is my Cargo.toml:
[package]
name = "brainfuck"
version = "0.4.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
console = "0.15.2"
either = "1.8.0"
EDIT 2
While messing around in a test repo, I discovered that this issue only arises when specifying the target. If I don’t specify a target and just use the default system target, I get a binary as expected.
It turns out I didn't read the cargo docs properly. The build cache docs mention that the results of a build with a specified target are stored in target/<triple>/debug/ (or target/<triple>/release/ for release builds) rather than directly in target/, and that is indeed where they were: with --target-dir=/tmp and --release, the binary ends up in /tmp/<triple>/release/ instead of /tmp/release/.
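So the fix is just to point the debug and upload steps at the triple-specific directory, for example (a sketch against the workflow above):

      - name: Debug missing files
        run: |
          ls -a /tmp/${{ matrix.target }}/release
      - name: Archive production artifacts
        uses: actions/upload-artifact@v3
        with:
          name: ${{ matrix.target }}
          path: |
            /tmp/${{ matrix.target }}/release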

yarn: install local package from monorepo and use it inside docker image with offline cache

My folder structure looks like this (monorepo):
project
|
+--- /api
|      |
|      +--- /.offline-cache
|      +--- /src
|      |      +--- index.js
|      |      +--- ...
|      |
|      +--- Dockerfile
|      +--- package.json
|      +--- yarn.lock
|
+--- /common
|      |
|      +--- /src
|      |      +--- index.js
|      |
|      +--- package.json
|
+--- /ui
|      |
|      +--- /.offline-cache
|      +--- /src
|      |      +--- index.js
|      |      +--- ...
|      |
|      +--- Dockerfile
|      +--- package.json
|      +--- yarn.lock
|
+--- docker-compose.yml
The offline cache and the Docker image builds for each 'service' (ui, api) are working.
Now I want to access/install the common module inside api and ui as well.
Running yarn add ./../common inside /api works: it installs the module inside the api folder and adds it to the package.json and yarn.lock files.
But when I try to rebuild the Docker image, I get an error telling me
error Package "" refers to a non-existing file '"/common"'.
That's because there is no common folder inside the Docker container, and the installed package isn't added to the offline mirror :(
I can't copy the common folder into the Docker image because it is outside the build context, and I don't want to publish to npm. What else can I do to get this working?
You can specify a context in your docker-compose.yml, which does not need to be the same directory as your Dockerfile.
So you can create something like this:
version: '3.5'

services:
  ui:
    build:
      context: .
      dockerfile: ui/Dockerfile
    ports:
      - 'xxxx:xxxx'
  api:
    build:
      context: .
      dockerfile: api/Dockerfile
    ports:
      - 'xxxx:xxxx'
The same thing can be done with docker build as well, by adding the -f option and running the command from the root directory:
docker build -f ui/Dockerfile -t xxxxxx/ui .
docker build -f api/Dockerfile -t xxxxxx/api .
Be aware that you also have to modify your Dockerfile slightly to match the file structure of the project (using WORKDIR).
FROM node:18-alpine
# switch to root and copy all relevant files
WORKDIR /app
COPY ./ui/ ./ui/
COPY ./common/ ./common/
# switch to relevant path (in this case ui)
WORKDIR /app/ui
RUN yarn && yarn build
CMD ["yarn", "start"]

How can I run server.js on Python based Google App Engine service

My project is a Python Google App Engine app, and the front-end code is written in Angular Universal, so I want to run server.js for server-side rendering in this project. But I don't know how to run this server.js (Node.js Express code) on the Python-based Google App Engine project.
When I run 'npm run build', server.js and a server folder containing the server-side JS are generated, along with a browser folder containing the front-end JS.
app.yaml is below.
service: my-project
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /static
  static_dir: static
- url: /assets
  static_dir: assets
- url: /.*
  static_files: index.html
  upload: index.html
The generated /dist folder after npm run build:dynamic:
/dist
|-- app.yaml
|-- browser/
|   |-- assets/
|   |-- index.html
|   `-- ......
|-- prerender.js
|-- server.js
`-- server/
    |-- assets/
    |-- main.bundle.js
    `-- .............
Can anyone help, please?
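The python27 runtime can't execute Node.js code inside the same service, so the usual pattern is to deploy the Angular Universal server.js as its own App Engine service with a Node.js runtime (keeping the Python service as-is) and route SSR traffic to it. A rough sketch of an app.yaml for such a separate service; the service name and runtime settings here are assumptions, not taken from the question:

# app.yaml for a hypothetical separate Node.js service that runs server.js
service: ssr
runtime: nodejs
env: flex

With the flexible Node.js runtime, App Engine starts the app with npm start, so package.json would need a start script along the lines of node server.js.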

Handler in subdirectory of AWS Lambda function not running

I'm getting an awfully unfortunate error on Lambda:
Unable to import module 'lib/index': Error
at require (internal/module.js:20:19)
Which is strange, because there is definitely a function called handler exported from lib/index... I'm not sure if the whole subdirectory thing has been an issue for others, so I wanted to ask.
sam-template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Does something crazy

Resources:
  SomeFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: lib/index.handler
      Role: arn:aws:iam::...:role/lambda-role
      Runtime: nodejs6.10
      Timeout: 10
      Events:
        Timer:
          Type: Schedule
          Properties:
            Schedule: rate(1 minute)
Module structure
|-- lib
|   `-- index.js
`-- src
    `-- index.js
I have nested it here because I'm transpiling ES6 during my build process, using the following excerpt from package.json:
"build": "babel src -d lib"
buildspec.yaml
version: 0.1
phases:
install:
commands:
- npm install
- aws cloudformation package --template-file sam-template.yaml --s3-bucket some-bucket --output-template-file compiled-sam-template.yaml
build:
commands:
- npm run build
post_build:
commands:
- npm prune --production
artifacts:
files:
- 'node_modules/**/*'
- 'lib/**/*'
- 'compiled-template.yaml'
The aws cloudformation package command is what ships the built assets, and in the buildspec shown it runs in the install phase, before npm run build has produced lib/. Moving it to post_build ensures it captures everything needed, including the lib/index in question:
  post_build:
    commands:
      - npm prune --production
      - aws cloudformation package ...
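Put together, the reordered buildspec would look roughly like this (the same commands as above, just moved; note that the artifacts list should also match the --output-template-file name, which is compiled-sam-template.yaml in the package command but compiled-template.yaml in the artifacts section of the question):

version: 0.1

phases:
  install:
    commands:
      - npm install
  build:
    commands:
      - npm run build
  post_build:
    commands:
      - npm prune --production
      - aws cloudformation package --template-file sam-template.yaml --s3-bucket some-bucket --output-template-file compiled-sam-template.yaml

artifacts:
  files:
    - 'node_modules/**/*'
    - 'lib/**/*'
    - 'compiled-sam-template.yaml'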
You are trying to import lib/index, which will try to find a package named lib (as if you had run npm install --save lib), but you are most likely trying to import a file relative to your own project, and you are not giving it a relative path in your import.
Change 'lib/index' to './lib/index' (or '../lib/index', etc., depending on where it is) and see if it helps.
By the way, if you're trying to import the file lib/index.js and not a directory lib/index/, then you may use the shorter ./lib path, as in:
const lib = require('./lib');
Of course you didn't show even a single line of your code so I can only guess what you're doing.
Your handler should be .lib/index.handler considering your index.js file is in a subdirectory lib.
The reference to the handler must be relative to the lambda package to be executed.
For example, if the lambda file is placed at the path:
x-lambda/yyy/lambda.py
then the handler must be:
..yyy/lambda.lambda_handler
This assumes that lambda.py contains the function lambda_handler().
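For reference, with the SAM template above (Runtime: nodejs6.10, Handler: lib/index.handler), Lambda expects the deployment package to contain lib/index.js exporting a function named handler; a minimal sketch, not the asker's actual code:

// lib/index.js -- hypothetical minimal handler matching Handler: lib/index.handler
exports.handler = function (event, context, callback) {
  callback(null, 'ok');
};

The handler path is resolved relative to the root of the deployment package, so it only works if lib/index.js actually ends up in the uploaded zip, which is why the buildspec ordering above matters.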

Docker: creating multiple containers from only one image

Situation: I want a Docker structure where I can have different projects under a common root (normally with a MEAN structure), but using different images (not everything in a single MEAN image). I want all the projects shared into the container that I create for a specific project, but only launch the configuration of the project that I want. The image just installs the basic dependencies, and the modules specific to each app are installed by the docker-compose override.
That is my ideal structure:
apps
| +--app1
| | +--node_modules //empty in host
| | +--package.json
| | +--docker-compose.app1.yml //override compose
| | +--index.js
| | +--...
| +--app2
| | +--node_modules //empty in host
| | +--...
| ....
| +--node_modules //global node_modules folder (empty in host)
| docker-compose.yml //principal compose
| Dockerfile //my personalized nodejs image config
I'm using an intermediate solution:
apps
| +--app1
| | +--index.js
| | +--...
| +--app2
| | +--index.js
| | +--...
| ....
| +--node_modules //global for all projects (empty in host)
| docker-compose.yml //principal compose
| docker-compose.app1.yml //override for app1
| docker-compose.app2.yml //override for app2
| ....
| Dockerfile //my personalized nodejs image config
DOCKERFILE
# base image
FROM node

# non-root user
RUN useradd --user-group --create-home --shell /bin/false apps

# my home folder
ENV HOME=/home/apps

# copy module config and dependencies, changing permissions to the non-root user
COPY package.json npm-shrinkwrap.json $HOME/
RUN chown -R apps:apps $HOME/

WORKDIR $HOME

# install modules and clean the cache
RUN npm install
RUN npm cache clean

# activate the non-root user
USER apps
Docker-compose
version: "2"
services:
web:
build: .
image: myNodejs:0.1
container_name: my_nodejs
volumes:
- .:/home/apps #sharing my host route with the container
- /home/apps/curso_mean/node_modules # route to node_modules
Docker-compose.app1
version: "2"
services:
web:
container_name: app1
command: node app1/index.js
ports:
- "3000:3000"
environment:
- NODE_ENV=development
Command to launch app1
docker-compose -f docker-compose.yml -f docker-compose.app1.yml up -d
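For reference, docker-compose merges the two files key by key, so the effective configuration of the web service when launching app1 is roughly the following (a sketch of the merged result, not a file you need to create):

services:
  web:
    build: .
    image: myNodejs:0.1
    container_name: app1          # overridden by docker-compose.app1.yml
    command: node app1/index.js   # added by the override
    volumes:
      - .:/home/apps
      - /home/apps/curso_mean/node_modules
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development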
The problem here is that I have the same node_modules for all the projects, and they are installed in the image (I would prefer to install them in the compose step), but I only use one image and I can have a different configuration for every project.
Questions
Is it possible to have the first structure?
Is it possible to install the app-specific modules in the docker-compose override (by executing more than one command, or in some other way) and keep the global node_modules installed in the image?
