Terraform: AWS CodePipeline multiple CodeCommit sources

I am moving away from GitHub.com to CodeCommit. I have been leveraging Terraform's modular approach to import GitHub repos as modules for years, but CodeCommit is very different in that respect. I have seen people leverage SSH to clone the repos locally, but I have also noticed that CodePipeline can take multiple sources. I need a way to add multiple repos to my pipeline so I can replicate the modular GitHub approach offered by Terraform; I want that code locally so I can execute it in a modular fashion.
I have googled for an example that shows how to leverage multiple CodeCommit sources in a pipeline, and I cannot find anything that clearly outlines how to do it in Terraform. Has anyone figured this out, or have examples they can point me to?

Looking into this, I found that it is not well documented anywhere, which is frustrating. Combining HashiCorp's rather vague description of the resource with AWS's multi-input example, I was finally able to come up with this Terraform configuration:
resource "aws_codepipeline" "foo" {
  name     = "tf-test-pipeline"
  role_arn = "codepipeline service role arn"

  artifact_store {
    location = "s3 bucket name, NOT the ARN"
    type     = "S3"
  }

  stage {
    name = "Source"

    action {
      name             = "Source"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeCommit"
      version          = "1"
      output_artifacts = ["src"]
      configuration = {
        RepositoryName = "vpc" // MUST be the name of your CodeCommit repo
        BranchName     = "master"
      }
      run_order = "1"
    }

    action {
      name             = "2ndSource" // you can make this any name
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeCommit"
      version          = "1"
      output_artifacts = ["src2"]
      configuration = {
        RepositoryName = "ec2"
        BranchName     = "master"
      }
      run_order = "2"
    }
  }

  stage {
    name = "Build"

    action {
      name            = "Build"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      input_artifacts = ["src", "src2"] // pass both repositories through
      version         = "1"
      configuration = {
        ProjectName   = "codebuild_project_name"
        PrimarySource = "Source"
      }
    }
  }
}
The trick here is to add the additional sources as extra actions in a single Source stage, not in separate stages. The example above shows two of them, but I have been able to add three with no problem.
Reference Links:
Hashicorp CodePipeline
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/codepipeline#run_order
AWS Multiple Inputs Json Example:
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-pipeline-multi-input-output.html
For those of you getting started for the first time, I recommend this link; it's pretty comprehensive and walks you through the entire build process, including roles and policies:
https://medium.com/swlh/intro-to-aws-codecommit-codepipeline-and-codebuild-with-terraform-179f4310fe07

# _____ ____ _ _ _____ _____ ______
# / ____|/ __ \| | | | __ \ / ____| ____|
# | (___ | | | | | | | |__) | | | |__
# \___ \| | | | | | | _ /| | | __|
# ____) | |__| | |__| | | \ \| |____| |____
# |_____/ \____/ \____/|_| \_\\_____|______|
Stages:
  - Name: Source
    Actions:
      - ActionTypeId:
          Category: Source
          Owner: AWS
          Provider: CodeStarSourceConnection
          Version: "1"
        Configuration:
          ConnectionArn: !Ref CodeStarConnectionArn
          FullRepositoryId: !Ref BitBucketRepo
          BranchName: !Ref BitBucketRepoReleaseBranch
          OutputArtifactFormat: "CODE_ZIP"
          DetectChanges: true
        Name: SourceCode
        OutputArtifacts:
          - Name: !Sub ${SourceArtifactName}
        Namespace: SourceVariables1
        RunOrder: 1
      - ActionTypeId:
          Category: Source
          Owner: AWS
          Provider: CodeStarSourceConnection
          Version: "1"
        Configuration:
          ConnectionArn: !Ref CodeStarConnectionArn
          FullRepositoryId: !Ref PipelineBitBucketRepo
          BranchName: !Ref PipelineBitBucketRepoReleaseBranch
          OutputArtifactFormat: "CODE_ZIP"
          DetectChanges: true
        Name: PipelineDefinition
        OutputArtifacts:
          - Name: !Sub ${PipelineCodeArtifactName}
        Namespace: SourceVariables2
        RunOrder: 1
# _____ ______ _ ______ __ __ _ _ _______ _______ ______
# / ____| ____| | | ____| | \/ | | | |__ __|/\|__ __| ____|
# | (___ | |__ | | | |__ | \ / | | | | | | / \ | | | |__
# \___ \| __| | | | __| | |\/| | | | | | | / /\ \ | | | __|
# ____) | |____| |____| | | | | | |__| | | |/ ____ \| | | |____
# |_____/|______|______|_| |_| |_|\____/ |_/_/ \_\_| |______|
  - !If
      - ShouldUpatePipelineStackOnChange
      - Name: UpdatePipeline
        Actions:
          - Name: CreateChangeSet
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Provider: CloudFormation
              Version: "1"
            Configuration:
              ActionMode: CHANGE_SET_REPLACE
              StackName: !Ref AWS::StackName
              ChangeSetName: !Sub ${AWS::StackName}-ChangeSet
              TemplatePath: !Sub ${PipelineCodeArtifactName}::${PipelineTemplateName}
              Capabilities: CAPABILITY_NAMED_IAM
              RoleArn: !GetAtt PipelineStackCloudFormationExecutionRole.Arn
            InputArtifacts:
              - Name: !Sub ${PipelineCodeArtifactName}
            RunOrder: 1
          - Name: ExecuteChangeSet
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Provider: CloudFormation
              Version: "1"
            Configuration:
              ActionMode: CHANGE_SET_EXECUTE
              StackName: !Ref AWS::StackName
              ChangeSetName: !Sub ${AWS::StackName}-ChangeSet
              RoleArn: !GetAtt PipelineStackCloudFormationExecutionRole.Arn
            OutputArtifacts:
              - Name: !Sub ${AWS::StackName}ChangeSet
            RunOrder: 2
      - !Ref AWS::NoValue


Unable to initialize GDBusConnection with cellular modem on linux

I am currently trying to access cellular modem data from within a C application on Linux using libmm-glib.
When I try to establish a GDBusConnection on a GFileIOStream created from a GFile with the path /dev/cdc-wdm0, the call to g_dbus_connection_new_sync() hangs indefinitely and logs the following error: _g_dbus_worker_do_read_cb: error determining bytes needed: Unable to determine message blob length - given blob is malformed. After this, the modem either becomes unavailable or is bumped to a different modem number (i.e. org/freedesktop/ModemManager/Modem/2 changes to 3).
I have also tried initializing a new GIOStream and passing that to mm_manager_scan_devices_sync(), following instructions from this post, but that results in the following error:
g_dbus_connection_signal_subscribe: assertion 'sender == NULL || (g_dbus_is_name (sender) && (connection->flags & G_DBUS_CONNECTION_FLAGS_MESSAGE_BUS_CONNECTION))' failed
I have placed code below followed by mmcli output with modem information
#include <libmm-glib.h>
#include <gio/gio.h>
//#include <gtk/gtk.h>
#include <stdio.h>
#include <stdbool.h>

GDBusConnection *pConnection;
MMManager *pManager;

int main (void)
{
    const gchar *guid = NULL;         /* no GUID: client-side connection */
    GCancellable *cancellable = NULL;

    printf("begin\n");
    GFile *pFile = g_file_new_for_path("/dev/cdc-wdm0");
    GFileIOStream *pStream = g_file_open_readwrite(pFile, NULL, NULL);
    pConnection = g_dbus_connection_new_sync(G_IO_STREAM(pStream),
                                             guid,   // for server auth.
                                             G_DBUS_CONNECTION_FLAGS_MESSAGE_BUS_CONNECTION,
                                             NULL,   // observer
                                             NULL,   // cancellable
                                             NULL);  // error
    pManager = mm_manager_new_sync(pConnection,
                                   G_DBUS_CONNECTION_FLAGS_MESSAGE_BUS_CONNECTION,
                                   NULL,             // cancellable
                                   NULL);            // error
    mm_manager_scan_devices_sync(pManager,
                                 cancellable,
                                 NULL);
    printf("end\n");
    return 0;
}
Below is the output from modemmanager mmcli --modem=0
----------------------------------
General | path: /org/freedesktop/ModemManager1/Modem/0
| device id:
----------------------------------
Hardware | manufacturer: Telit
| model: LE910C4-NF
| firmware revision: 25.21.660 1 [Mar 04 2021 12:00:00]
| carrier config: default
| h/w revision: 1.30
| supported: gsm-umts, lte
| current: gsm-umts, lte
| equipment id: 0
----------------------------------
System | device: /sys/devices/platform/soc#0/32c00000.bus/32e50000.usb/ci_hdrc.1/usb1/1-1/1-1.2
| drivers: qmi_wwan, option
| plugin: telit
| primary port: cdc-wdm0
| ports: cdc-wdm0 (qmi), ttyUSB0 (ignored), ttyUSB1 (gps),
| ttyUSB4 (ignored), wwan0 (net)
----------------------------------
Status | lock: sim-pin2
| unlock retries: sim-pin (3), sim-puk (10), sim-pin2 (10), sim-puk2 (10)
| state: connected
| power state: on
| access tech: lte
| signal quality: 75% (cached)
----------------------------------
Modes | supported: allowed: 3g; preferred: none
| allowed: 4g; preferred: none
| allowed: 3g, 4g; preferred: 4g
| allowed: 3g, 4g; preferred: 3g
| current: allowed: 3g, 4g; preferred: 4g
----------------------------------
Bands | supported: utran-4, utran-5, utran-2, eutran-2, eutran-4, eutran-5,
| eutran-12, eutran-13, eutran-14, eutran-66, eutran-71
| current: utran-4, utran-5, utran-2, eutran-2, eutran-4, eutran-5,
| eutran-12, eutran-13, eutran-14, eutran-66, eutran-71
----------------------------------
IP | supported: ipv4, ipv6, ipv4v6
----------------------------------
3GPP | imei: 3
| enabled locks: fixed-dialing
| operator id: 311480
| operator name: VZW
| registration: home
----------------------------------
3GPP EPS | initial bearer path: /org/freedesktop/ModemManager1/Bearer/0
| initial bearer apn: super
| initial bearer ip type: ipv4
----------------------------------
SIM | primary sim path: /org/freedesktop/ModemManager1/SIM/0
| sim slot paths: slot 1: /org/freedesktop/ModemManager1/SIM/0 (active)
| slot 2: none
----------------------------------
Bearer | paths: /org/freedesktop/ModemManager1/Bearer/2
| /org/freedesktop/ModemManager1/Bearer/1

can't exec Exe or batch (lolminer) with node.js

I am trying to create a simple lolMiner Electron app for my school project.
So I downloaded lolMiner and set up this batch file:
lolMiner.exe --algo ETHASH --pool ethash.unmineable.com:3333 --user DOGE:DK32SaiLzRn9npSwxzCoCY8BLbHMAqiUVV.Desktop --ethstratum ETHPROXY
pause
Then I try to run the batch file with this code:
require('child_process').exec('cmd /c mining.bat', function(err, stdout, stderr){
// ...your callback code may run here...
if (err) {
console.error(err);
return;
}
console.log(stdout);
});
but it returns nothing.
I also tried passing the arguments directly, but no luck:
const execFile = require('child_process').execFile;
const child = execFile('lolMiner.exe', ['--algo','ETHASH','--pool','ethash.unmineable.com:3333','--user','DOGE:DK32SaiLzRn9npSwxzCoCY8BLbHMAqiUVV.Desktop'], (err, stdout, stderr) => {
if (err) {
throw err;
}
console.log(stdout);
});
But this, with only the --algo argument, does return something:
const execFile = require('child_process').execFile;
const child = execFile('lolMiner.exe', ['--algo','ETHASH'], (err, stdout, stderr) => {
if (err) {
throw err;
}
console.log(stdout);
});
+---------------------------------------------------------+
| _ _ __ __ _ _ ____ ___ |
| | | ___ | | \/ (_)_ __ ___ _ __ / | |___ \( _ ) |
| | |/ _ \| | |\/| | | '_ \ / _ \ '__| | | __) / _ \ |
| | | (_) | | | | | | | | | __/ | | |_ / __/ (_) | |
| |_|\___/|_|_| |_|_|_| |_|\___|_| |_(_)_____\___/ |
| |
| This software is for mining |
| Ethash, Etchash |
| Equihash 144/5, 192/7, 210/9 |
| BeamHash I, II, III |
| ZelHash (EquihashR 125/4/0) |
| Cuck(ar)oo 29 |
| Cuckaroo 30 CTX |
| Cuckatoo 31/32 |
| |
| |
| Made by Lolliedieb, May 2021 |
+---------------------------------------------------------+
I still can't make it run.
My idea is to let users input their wallet address, choose their algo, and run lolMiner.
Will this idea work, or can it not be done with Node.js?

New to Web development not sure how to fix reactjs errors [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
This is a long page full of error messages, and I cannot get hold of the person teaching this class; it has been 2 weeks. I tried redoing it but I still got the same messages, and I do not know what to do to fix them. Here is the GitHub link: https://github.com/SadiaSanam/petshop
And these are the messages. How do I fix them? Some of those files I cannot even find.
TypeError: path.split is not a function
get
C:/Users/sadia/OneDrive/SheCodes/Full stack/app/src/utils/get.ts:6
3 | import isUndefined from './isUndefined';
4 |
5 | export default (obj: any = {}, path: string, defaultValue?: unknown) => {
> 6 | const result = compact(path.split(/[,[\].]+?/)).reduce(
7 | (result, key) => (isNullOrUndefined(result) ? result : result[key]),
8 | obj,
9 | );
(anonymous function)
C:/Users/sadia/OneDrive/SheCodes/Full stack/app/src/useForm.ts:967
964 |
965 | const register: UseFormRegister<TFieldValues> = React.useCallback(
966 | (name, options) => {
> 967 | const isInitialRegister = !get(fieldsRef.current, name);
| ^ 968 |
969 | set(fieldsRef.current, name, {
970 | _f: {
Login
C:/Users/sadia/OneDrive/SheCodes/Full stack/app/petshop/src/components/Login.js:88
85 | <div className='form-control'>
86 |
87 | <label htmlFor='email'>Email</label>
> 88 | <input type='email' name='email' id='email' ref={register( {required:true}) } />
| ^ 89 | { errors.email ? <span className='err'> email is required!</span> : null }
90 |
91 | <label htmlFor='password'>Password</label>
(The remaining frames are react-dom and scheduler internals: renderWithHooks, mountIndeterminateComponent, beginWork, invokeGuardedCallback, performUnitOfWork, workLoopSync, renderRootSync, performSyncWorkOnRoot, runWithPriority, flushSyncCallbackQueue, discreteUpdates, dispatchDiscreteEvent.)
To use react-hook-form, a few fixes are needed:
The input fields call the register function. This function takes 2 params:
register(field_name <- string, options <- object);
In your case, you need to call it like this:
<input type='email' name='email' id='email' ref={register("email", {required:true}) } />
<input type='password' name='password' id='password'
ref={register("password", {required:true, minLength:6, maxLength: 10} )} />
You're also reading the errors object the wrong way. This is how you should get it:
const { register, handleSubmit, formState: { errors }, reset } = useForm();
The last error I found after those fixes is in the way you call the register function.
You are assigning register to the ref prop. According to the docs, you should just spread register into the component, and it will return all the props:
<input type='email' id='email' {...register("email", {required:true}) } />
Here are the sources, where you can read up and dig into "why am I doing this?" =):
register():
https://react-hook-form.com/api/useform/register
errors:
https://react-hook-form.com/api/useformstate/errormessage
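The original TypeError can also be reproduced in isolation with a simplified get-by-path helper in the spirit of the utils/get.ts frame in the stack trace (illustrative only, not the library's actual code): when register is given an options object where a string field name is expected, path.split is called on an object and throws.

```javascript
// Simplified get-by-path helper, in the spirit of utils/get.ts above
// (illustrative only, not react-hook-form's real implementation).
function get(obj, path) {
  return path
    .split(/[,[\].]+?/)          // `path` must be a string for this to work
    .filter(Boolean)
    .reduce((acc, key) => (acc == null ? acc : acc[key]), obj);
}

const values = { email: 'a@b.c' };
console.log(get(values, 'email'));   // 'a@b.c' -- field name is a string

try {
  // v6-style call: only an options object, no field name
  get(values, { required: true });
} catch (e) {
  console.log(e.message);            // path.split is not a function
}
```

That is exactly what the v6-style ref={register({required:true})} call appears to trigger against the v7 API.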
I'll add here some tips to help you find the solution to new errors:
Narrow down where to focus: when you have an error, you need to find exactly what is causing it. In your case, the console was pointing at a file that isn't even in your main folders (it was a dependency). In that case, try deleting some code and see if the project works; if it does, you know the problem is somewhere in the removed code, and you can repeat the process to filter it down.
Go to the official docs/demos and compare your code: I've never used react-hook-form, but a look at the docs helped me find the errors.

Load entities into TypeORM OrmConfig from Library project to Application Project

I have two projects in my application:
App/AppServer
libraries/domain
Below is the folder structure:
+---apps
| \---AppServer
| +---config
| +---node_modules
| +---src
| | +---auth
| | | \---dto
| | +---config
| | +---masterDataHttp
| | \---tasks
| | +---dto
| | \---pipes
| \---test
+---libraries
| \---domain
| +---node_modules
| \---src
| \---masterData
\---node_modules
I have some entities defined under libraries\domain\src\masterData and a few entities under apps\AppServer\src\tasks.
My ormconfig is defined under apps\AppServer\src\config. It imports the entities using
__dirname + '/../**/*.entity.{js,ts}'
Using the above we can import the entities under apps\AppServer\src, but I am trying to figure out the best approach to import the entities defined under libraries\domain\src.
One option is to import the entities directly using
import { Entity1, Entity2 } from '#myproj/domain'
What is the recommended practice/approach to address this? TIA
Importing the entities directly:
entities: [__dirname + '/../**/*.entity.{js,ts}', Entity1, Entity2]
Or you could wrap all of the entities of the library in a variable:
// libraries/domain
export const entities = [Entity1, Entity2];
// importing the entities
import { entities } from '#myproj/domain' ;
...
entities: [__dirname + '/../**/*.entity.{js,ts}', ...entities]
Importing the entities with a path string, as you already do:
entities: [
__dirname + '/../**/*.entity.{js,ts}',
'src/libraries/domain/**/*.entity.{js,ts}'
]
This will just work.

Puppet environment: Parameter source failed on File

I have an issue with retrieving a file on a puppet agent. I'm running puppet master version 3.7. Here is my structure:
.
|-- auth.conf
|-- environments
| |-- dev
| | |-- environment.conf
| | |-- manifests
| | | `-- site.pp
| | `-- modules
| | `-- common
| | |-- files
| | | `-- profile
| | |-- manifests
| | | `-- init.pp
| | `-- staticFiles
| | `-- profile
| |-- prod
| | |-- environment.conf
| | |-- manifests
| | | `-- site.pp
| | `-- modules
| | `-- common
| | `-- manifests
| | `-- init.pp
| `-- uat
| |-- environment.conf
| |-- manifests
| | `-- site.pp
| `-- modules
| `-- common
| `-- manifests
| `-- init.pp
|-- etckeeper-commit-post
|-- etckeeper-commit-pre
|-- fileserver.conf
|-- hieradata
|-- hiera.yaml
|-- manifests
|-- modules
|-- nodes
`-- puppet.conf
My config is with puppet environment prod,uat and dev.
In dev environment I have:
manifest = /etc/puppet/environments/dev/manifests/site.pp
#modulepath = /etc/puppet/environments/dev/modules
modulepath = site:dist:modules:$basemodulepath
In a puppet client conf, I have:
environment = dev
basemodulepath=/etc/puppet/environments/$environment/modules
Here is my init common puppet module:
file { "/etc/profile":
  ensure  => "file",
  mode    => 644,
  owner   => "root",
  group   => "root",
  require => Package["tree"],
  source  => "puppet::///common/profile"
}
On the node I got this error:
Error: Failed to apply catalog: Parameter source failed on File[/etc/profile]: Cannot use opaque URLs 'puppet::///common/profile' at /etc/puppet/environments/dev/modules/common/manifests/init.pp:13
Wrapped exception:
Cannot use opaque URLs 'puppet::///common/profile'
Even if I write the source like:
"puppet::///common/staticFiles/profile"
"puppet::///common/files/profile"
"puppet::///modules/common/profile"
"puppet::///modules/common/files/profile"
"puppet::///modules/common/staticFiles/profile"
I still get the same issue!
Does anyone know how to solve this?
I really want to retrieve the "profile" file from the /etc/puppet/environments/dev/modules/common/files/ directory.
Here is also my puppet master conf content:
[main]
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
prerun_command=/etc/puppet/etckeeper-commit-pre
postrun_command=/etc/puppet/etckeeper-commit-post
server = puppetmaster01
certname = puppetmaster01
environment = prod
condir = /etc/puppet
report = true
show_diff = true
trace = true
runinterval=60
environmentpath=$confdir/environments
#basemodulepath = $environmentpath/$environment/modules:/etc/puppet/modules:/usr/share/puppet/modules
[master]
# These are needed when the puppetmaster is run by passenger
# and can safely be removed if webrick is used.
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
certname = puppetmaster01
#modulepath=$confdir/environments/$environment/modules:$confdir/modules
[agent]
report = true
show_diff = true
The URL
puppet:///modules/common/profile
should work. Note that all of your URLs have a doubled colon (puppet::///); the scheme is puppet:/// with a single colon, which is why Puppet complains about an "opaque" URL. Also make sure that directory environments are in fact enabled on the master.
If this keeps failing consistently, please run
puppet agent --test --trace --verbose --debug
and paste the output on an appropriate service for review.
