Azure Function HTTP request 404 when published to Azure through Docker and Visual Studio

I'm attempting to learn some more about Azure Functions 2.0 and Docker containers to publish to my Azure instance. I followed the tutorial below, with the only difference being that I published with Docker to a container registry in Azure using Visual Studio 2019.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-your-first-function-visual-studio
This all worked correctly and I was able to start my container and visit the site. However, in the example you can visit /api/function1 and get a response. This works on my localhost but on the live site it returns a 404. It seems that /api/function1 is not reachable after being published.
The app itself responds when I visit the IP directly, so I know it is running. Do I need to do something else in Azure to expose my APIs?
My container log only shows this:
Hosting environment: Production
Content root path: C:\
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
I grabbed my Dockerfile from here:
https://github.com/Azure/azure-functions-docker/blob/master/host/2.0/nanoserver-1809/Dockerfile
# escape=`
# Installer image
FROM mcr.microsoft.com/windows/servercore:1809 AS installer-env
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
# Retrieve .NET Core SDK
ENV DOTNET_SDK_VERSION 2.2.402
RUN Invoke-WebRequest -OutFile dotnet.zip https://dotnetcli.blob.core.windows.net/dotnet/Sdk/$Env:DOTNET_SDK_VERSION/dotnet-sdk-$Env:DOTNET_SDK_VERSION-win-x64.zip; `
    $dotnet_sha512 = '0fa3bf476b560c8fc70749df37a41580f5b97334b7a1f19d66e32096d055043f4d7ad2828f994306e0a24c62a3030358bcc4579d2d8d439d90f36fecfb2666f6'; `
    if ((Get-FileHash dotnet.zip -Algorithm sha512).Hash -ne $dotnet_sha512) { `
        Write-Host 'CHECKSUM VERIFICATION FAILED!'; `
        exit 1; `
    }; `
    `
    Expand-Archive dotnet.zip -DestinationPath dotnet; `
    Remove-Item -Force dotnet.zip
ENV ASPNETCORE_URLS=http://+:80 `
    DOTNET_RUNNING_IN_CONTAINER=true `
    DOTNET_USE_POLLING_FILE_WATCHER=true `
    NUGET_XMLDOC_MODE=skip `
    PublishWithAspNetCoreTargetManifest=false `
    HOST_COMMIT=69f124faed40d20d9d8e5b8d51f305d249b21512 `
    BUILD_NUMBER=12858
RUN [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; `
    Invoke-WebRequest -OutFile host.zip https://github.com/Azure/azure-functions-host/archive/$Env:HOST_COMMIT.zip; `
    Expand-Archive host.zip .; `
    cd azure-functions-host-$Env:HOST_COMMIT; `
    /dotnet/dotnet publish /p:BuildNumber=$Env:BUILD_NUMBER /p:CommitHash=$Env:HOST_COMMIT src\WebJobs.Script.WebHost\WebJobs.Script.WebHost.csproj --output C:\runtime
# Runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2.7-nanoserver-1809
COPY --from=installer-env ["C:\\runtime", "C:\\runtime"]
ENV AzureWebJobsScriptRoot=C:\approot `
    WEBSITE_HOSTNAME=localhost:80
CMD ["dotnet", "C:\\runtime\\Microsoft.Azure.WebJobs.Script.WebHost.dll"]
Here's my Function1 code for my Azure Function:
public static class Function1
{
    [FunctionName("Function1")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");
        string productid = req.Query["productid"];
        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        dynamic data = JsonConvert.DeserializeObject(requestBody);
        productid = productid ?? data?.product;
        Product newProduct = new Product()
        {
            ProductNumber = 0,
            ProductName = "Unknown",
            ProductCost = 0
        };
        if (Convert.ToInt32(productid) == 1)
        {
            newProduct = new Product()
            {
                ProductCost = 100,
                ProductName = "Lime Tree",
                ProductNumber = 1
            };
        }
        else if (Convert.ToInt32(productid) == 2)
        {
            newProduct = new Product()
            {
                ProductCost = 500,
                ProductName = "Lemon Tree",
                ProductNumber = 2
            };
        }
        return productid != null
            ? (ActionResult)new JsonResult(newProduct)
            : new BadRequestObjectResult("Please pass a productid on the query string or in the request body");
    }
}
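For reference, a quick local test of this trigger; a hedged sketch assuming the Functions host's default local port 7071 (note that AuthorizationLevel.Function requires a function key, ?code=..., outside local development, though a missing key shows up as a 401 rather than a 404):
curl "http://localhost:7071/api/Function1?productid=1"
# expected: a JSON body describing product 1 ("Lime Tree")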
Here's a photo of my container running with my image.
I'm new to this so any advice would be helpful for sure!
Thanks!

First, I don't know if you really need to (or want to) run Functions on Windows containers. If you want to run in a container, I would probably opt for Linux. For that, here is an example Dockerfile. It builds on top of the Microsoft-provided base image, so you don't have to build that from scratch.
I'm sure there is also a base image for Windows that is already built. If you need it, just look around in the similar repo, I guess.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
COPY . ./
RUN dotnet publish myfunction -c Release -o myfunction/out
FROM mcr.microsoft.com/azure-functions/dotnet:3.0 AS base
WORKDIR /app
EXPOSE 80
COPY --from=build-env /app/myfunction/out .
ENV AzureWebJobsScriptRoot=/app
ENV AzureFunctionsJobHost__Logging__Console__IsEnabled=true
The important part is RUN dotnet publish myfunction -c Release -o myfunction/out. Replace myfunction with the (folder) name of your actual Function.
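To sanity-check such an image before pushing it, I would build and run it locally; a hedged sketch where the tag myfunction:latest and host port 8080 are placeholders of mine:
docker build -t myfunction:latest .
docker run -p 8080:80 myfunction:latest
# in another shell, confirm the function route responds:
curl "http://localhost:8080/api/<your-function-name>"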

@silent's answer was correct: Linux containers are the way to go for Azure Functions. My environment wasn't set up correctly for Linux containers, but once I got a correct environment this worked out of the box.
Here's my latest Dockerfile for another project that uses Linux containers:
# See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/azure-functions/dotnet:2.0 AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS build
WORKDIR /src
COPY ["FunctionTestAppLinux/FunctionTestAppLinux.csproj", "FunctionTestAppLinux/"]
RUN dotnet restore "FunctionTestAppLinux/FunctionTestAppLinux.csproj"
COPY . .
WORKDIR "/src/FunctionTestAppLinux"
RUN dotnet build "FunctionTestAppLinux.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "FunctionTestAppLinux.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENV AzureWebJobsScriptRoot=/app

Related

Waypoint deployment through Nomad getting error "http: no Host in request URL"

I'm currently building an app using React, and I build a Docker container to help with the deployment through Waypoint and Nomad. However, I'm currently getting the following issue:
! Put "http:///v1/jobs/parse": http: no Host in request URL
My Dockerfile looks like this:
FROM node:16
# Set the working directory to /app
WORKDIR /app
# Copy the package.json and package-lock.json files to the container
COPY package*.json ./
# Install the dependencies
RUN npm install
# Copy the rest of the application code to the container
COPY . .
# Expose port
EXPOSE 3000
# Specify the command to run the application
CMD [ "npm", "run", "start" ]
This is my Nomad configuration file:
# example.nomad.tpl
job "web" {
  datacenters = ["dc1"]

  group "app" {
    update {
      max_parallel = 1
      canary       = 1
      auto_revert  = true
      auto_promote = false
      health_check = "task_states"
    }

    task "app" {
      driver = "docker"

      config {
        image = "${artifact.image}:${artifact.tag}"
      }

      env {
        %{ for k,v in entrypoint.env ~}
        ${k} = "${v}"
        %{ endfor ~}

        // For URL service
        PORT = "3000"
      }
    }
  }
}
This is my waypoint.hcl config file:
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: MPL-2.0

project = "nomad-jobspec-nodejs"

app "nodejs-jobspec-web" {
  build {
    use "pack" {}
    registry {
      use "docker" {
        image = "hvaandres/my-app-nomad"
        tag   = "latest"
        local = true
      }
    }
  }

  deploy {
    use "nomad-jobspec" {
      // Templated to perhaps bring in the artifact from a previous
      // build/registry, entrypoint env vars, etc.
      jobspec = templatefile("${path.app}/example.nomad.tpl")
    }
  }

  release {
    use "nomad-jobspec-canary" {
      groups = [
        "app"
      ]
      fail_deployment = false
    }
  }
}
I'm new to this tool, and I wonder if anyone can point me in the right direction on how to solve this problem.
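One thing to check: the empty host in Put "http:///v1/jobs/parse" usually means the Nomad API address was never supplied, since Waypoint's Nomad jobspec plugin reads the standard Nomad client environment variables. A minimal sketch of the first thing I would try (the address below is a placeholder for your actual Nomad server):
# Tell the Nomad API client where the cluster lives before running Waypoint
export NOMAD_ADDR="http://127.0.0.1:4646"   # placeholder; use your Nomad server's address
waypoint up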

Dockerized Logic App doesn't work when the container is running, but works in debug mode in VS Code

I am trying to put an Azure Logic App inside a Docker image.
I was following some Microsoft tutorials:
This one for creating the Logic App (it's a bit outdated, but mostly still valid): https://microsoft.github.io/AzureTipsAndTricks/blog/tip304.html
And this one for making the Docker image: https://techcommunity.microsoft.com/t5/azure-developer-community-blog/azure-tips-and-tricks-how-to-run-logic-apps-in-a-docker/ba-p/3545220
The only difference between these tutorials and my PoC is that I am using the Node mode of Logic Apps instead of .NET Core, and this Dockerfile:
FROM mcr.microsoft.com/azure-functions/node:3.0

ENV AzureWebJobsStorage DefaultEndpointsProtocol=https;AccountName=logicappsexamples;AccountKey=AHaGR5SQZYdB2LgS2+pPbsFQO3eZDZ25T5EV3mcc1ZWJXOk7QTCKEpjDcyD6lp2J9MYo+c1OcpLu+ASt8aoEWg==;EndpointSuffix=core.windows.net
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true \
    FUNCTIONS_V2_COMPATIBILITY_MODE=true
ENV WEBSITE_HOSTNAME localhost
ENV WEBSITE_SITE_NAME test
ENV AZURE_FUNCTIONS_ENVIRONMENT Development

COPY . /home/site/wwwroot
# WORKDIR (not `RUN cd ...`) is what persists into the final image; a bare
# `RUN cd /home/site/wwwroot` has no effect because each RUN gets its own shell.
WORKDIR /home/site/wwwroot
The Logic App is simple: it just puts a message in a queue when you call a URL. In debug mode in VS Code everything works fine. The problem comes when I run the dockerized Logic App.
The Logic App is supposed to use a queue called "test", but when the container finishes setting up, it creates a new queue instead.
And in the last step of the last tutorial (https://techcommunity.microsoft.com/t5/azure-developer-community-blog/azure-tips-and-tricks-how-to-run-logic-apps-in-a-docker/ba-p/3545220), when I call the trigger URL, I don't receive anything in either of the queues.
I get these logs from the running container:
info: Host.Triggers.Workflows[206]
Workflow action ends.
flowName='Stateless1',
actionName='Put_a_message_on_a_queue_(V2)',
flowId='2731d82fc1324e4fb0df69fd5c549d72',
flowSequenceId='08585288407219836302',
flowRunSequenceId='08585288406768053370693349962CU00',
correlationId='ebf4e18e-405f-41c8-bb6a-bdf84d4a7a16',
status='Failed',
statusCode='BadRequest',
error='', durationInMilliseconds='910',
inputsContentSize='-1',
outputsContentSize='-1',
extensionVersion='1.2.18.1',
siteName='test',
slotName='',
actionTrackingId='1439f372-4ce0-4709-b9ab-ee18db8839ae',
clientTrackingId='08585288406768053370693349962CU00',
properties='{
"$schema":"2016-06-01",
"startTime":"2023-01-03T17:16:48.9639703Z",
"endTime":"2023-01-03T17:16:49.8743232Z",
"status":"Failed",
"code":"BadRequest",
"executionClusterType":"Classic",
"resource":{
"workflowId":"2731d82fc1324e4fb0df69fd5c549d72",
"workflowName":"Stateless1",
"runId":"08585288406768053370693349962CU00",
"actionName":"Put_a_message_on_a_queue_(V2)"
},
"correlation":{
"actionTrackingId":"1439f372-4ce0-4709-b9ab-ee18db8839ae",
"clientTrackingId":"08585288406768053370693349962CU00"
},
"api":{}
}',
actionType='ApiConnection',
sequencerType='Linear',
flowScaleUnit='cu00',
platformOptions='RunDistributionAcrossPartitions, RepetitionsDistributionAcrossSequencers, RunIdTruncationForJobSequencerIdDisabled, RepetitionPreaggregationEnabled',
retryHistory='',
failureCause='',
overrideUsageConfigurationName='',
hostName='',
activityId='46860f56-96bc-462a-b9bc-3aed4ad6464c'.
info: Host.Triggers.Workflows[202]
Workflow run ends.
flowName='Stateless1',
flowId='2731d82fc1324e4fb0df69fd5c549d72',
flowSequenceId='08585288407219836302',
flowRunSequenceId='08585288406768053370693349962CU00',
correlationId='ebf4e18e-405f-41c8-bb6a-bdf84d4a7a16',
extensionVersion='1.2.18.1',
siteName='test',
slotName='',
status='Failed',
statusCode='ActionFailed',
error='{
"code":"ActionFailed",
"message":"An action failed. No dependent actions succeeded."
}',
durationInMilliseconds='1202',
clientTrackingId='08585288406768053370693349962CU00',
properties='{
"$schema":"2016-06-01",
"startTime":"2023-01-03T17:16:48.7228752Z",
"endTime":"2023-01-03T17:16:50.0174324Z",
"status":"Failed",
"code":"ActionFailed",
"executionClusterType":"Classic",
"resource":{
"workflowId":"2731d82fc1324e4fb0df69fd5c549d72",
"workflowName":"Stateless1",
"runId":"08585288406768053370693349962CU00",
"originRunId":"08585288406768053370693349962CU00"
},
"correlation":{
"clientTrackingId":"08585288406768053370693349962CU00"
},
"error":{
"code":"ActionFailed",
"message":"An action failed. No dependent actions succeeded."
}
}',
sequencerType='Linear',
flowScaleUnit='cu00',
platformOptions='RunDistributionAcrossPartitions, RepetitionsDistributionAcrossSequencers, RunIdTruncationForJobSequencerIdDisabled, RepetitionPreaggregationEnabled',
kind='Stateless',
runtimeOperationOptions='None',
usageConfigurationName='',
hostName='',
activityId='c6ca440e-fef5-457a-a4cb-9d5a3d806518'.
info: Host.Triggers.Workflows[203]
Workflow trigger starts.
flowName='Stateless1',
triggerName='manual',
flowId='2731d82fc1324e4fb0df69fd5c549d72',
flowSequenceId='08585288407219836302',
extensionVersion='1.2.18.1',
siteName='test',
slotName='',
status='',
statusCode='',
error='',
durationInMilliseconds='-1',
flowRunSequenceId='08585288406768053370693349962CU00',
inputsContentSize='-1',
outputsContentSize='-1',
clientTrackingId='08585288406768053370693349962CU00',
properties='{
"$schema":"2016-06-01",
"startTime":"2023-01-03T17:16:48.6768387Z",
"status":"Succeeded",
"fired":true,
"resource":{
"workflowId":"2731d82fc1324e4fb0df69fd5c549d72",
"workflowName":"Stateless1",
"runId":"08585288406768053370693349962CU00",
"triggerName":"manual"
},
"correlation":{
"clientTrackingId":"08585288406768053370693349962CU00"
},
"api":{}
}',
triggerType='Request',
flowScaleUnit='cu00',
triggerKind='Http',
sourceTriggerHistoryName='',
failureCause='',
hostName='',
activityId='ebf4e18e-405f-41c8-bb6a-bdf84d4a7a16'.
info: Host.Triggers.Workflows[204]
Workflow trigger ends.
flowName='Stateless1',
triggerName='manual',
flowId='2731d82fc1324e4fb0df69fd5c549d72',
flowSequenceId='08585288407219836302',
status='Succeeded',
statusCode='',
error='',
extensionVersion='1.2.18.1',
siteName='test',
slotName='',
durationInMilliseconds='1348',
flowRunSequenceId='08585288406768053370693349962CU00',
inputsContentSize='-1',
outputsContentSize='-1',
clientTrackingId='08585288406768053370693349962CU00',
properties='{
"$schema":"2016-06-01",
"startTime":"2023-01-03T17:16:48.6768387Z",
"endTime":"2023-01-03T17:16:50.0319177Z",
"status":"Succeeded",
"fired":true,
"resource":{
"workflowId":"2731d82fc1324e4fb0df69fd5c549d72",
"workflowName":"Stateless1",
"runId":"08585288406768053370693349962CU00",
"triggerName":"manual"
},
"correlation":{
"clientTrackingId":"08585288406768053370693349962CU00"
},
"api":{}
}',
triggerType='Request',
flowScaleUnit='cu00',
triggerKind='Http',
sourceTriggerHistoryName='',
failureCause='',
overrideUsageConfigurationName='',
hostName='',
activityId='ebf4e18e-405f-41c8-bb6a-bdf84d4a7a16'.
info: Function.Stateless1[2]
Executed 'Functions.Stateless1' (Succeeded, Id=10af0f54-765a-4954-a42f-373ceb58c94b, Duration=1545ms)
So, what am I doing wrong? It looks like the queue name ("test") is not properly passed to the Docker image, and for that reason the container creates a new queue... but how can I fix it?
I would greatly appreciate any help... I can't find anything clear on the internet.
Thanks!
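Not a full answer, but two hedged observations. First, the failing step in the logs is actionType='ApiConnection', i.e. the managed Azure Queues connector; managed API connections generally need connection configuration beyond AzureWebJobsStorage to authenticate outside Azure, so the built-in queue connector may be the easier path in a container. Second, the extra queue you see is most likely an internal artifact the workflow runtime creates in the AzureWebJobsStorage account on startup, not a replacement for your "test" queue. Separately, rather than baking the storage key into the image with ENV, it can be supplied when the container starts; a sketch with placeholder names (the image tag mylogicapp is mine, and the connection string is abbreviated):
docker build -t mylogicapp .
docker run -p 8080:80 \
  -e AzureWebJobsStorage="DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net" \
  mylogicapp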

Deploying a single bash script with NixOps

I'm just starting to learn Nix / NixOS / NixOps. I need to install a simple bash script on a remote host with NixOps, and I can't figure out how to do it. I have two files:
just-deploy-bash-script.nix
{
  resources.sshKeyPairs.ssh-key = {};

  test-host = { config, lib, pkgs, ... }: {
    deployment.targetEnv = "digitalOcean";
    deployment.digitalOcean.region = "sgp1";
    deployment.digitalOcean.size = "s-2vcpu-4gb";

    environment.systemPackages =
      let
        my-package = pkgs.callPackage ./my-package.nix { inherit pkgs; };
      in [
        pkgs.tmux
        my-package
      ];
  };
}
my-package.nix
{ pkgs ? import <nixpkgs> {}, ... }:
let
  pname = "my-package";
  version = "1.0.0";
  stdenv = pkgs.stdenv;
in
stdenv.mkDerivation {
  inherit pname version;
  src = ./.;
  installPhase =
    let
      script = pkgs.writeShellScriptBin "my-test" ''
        echo This is my test script
      '';
    in
    ''
      mkdir $out;
      cp -r ${script} $out/
    '';
}
I deploy as follows: I go to the directory in which these two files are located and then run two commands in sequence:
nixops create -d test just-deploy-bash-script.nix
nixops deploy -d test
The deployment passes without errors and completes successfully. But when I log in to the newly created remote host, I find that the tmux package from the standard set is present in the system, while my-package is absent:
nixops ssh -d test test-host
[root@test-host:~]# which tmux
/run/current-system/sw/bin/tmux
[root@test-host:~]# find /nix/store/ -iname tmux
/nix/store/hd1sgvb4pcllxj69gy3qa9qsns68arda-nixpkgs-20.03pre206749.5a3c1eda46e/nixpkgs/pkgs/tools/misc/tmux
/nix/store/609zdpfi5kpz2c7mbjcqjmpb4sd2y3j4-ncurses-6.0-20170902/share/terminfo/t/tmux
/nix/store/4cxkil2r3dzcf5x2phgwzbxwyvlk6i9k-system-path/share/bash-completion/completions/tmux
/nix/store/4cxkil2r3dzcf5x2phgwzbxwyvlk6i9k-system-path/bin/tmux
/nix/store/606ni2d9614sxkhnnnhr71zqphdam6jc-system-path/share/bash-completion/completions/tmux
/nix/store/606ni2d9614sxkhnnnhr71zqphdam6jc-system-path/bin/tmux
/nix/store/ddlx3x8xhaaj78xr0zasxhiy2m564m2s-nixos-17.09.3269.14f9ee66e63/nixos/pkgs/tools/misc/tmux
/nix/store/kvia4rwy9y4wis4v2kb9y758gj071p5v-ncurses-6.1-20190112/share/terminfo/t/tmux
/nix/store/c3m8qvmn2yxkgpfajjxbcnsgfrcinppl-tmux-2.9a/share/bash-completion/completions/tmux
/nix/store/c3m8qvmn2yxkgpfajjxbcnsgfrcinppl-tmux-2.9a/bin/tmux
[root@test-host:~]# which my-test
which: no my-test in (/root/bin:/run/wrappers/bin:/root/.nix-profile/bin:/etc/profiles/per-user/root/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin)
[root@test-host:~]# find /nix/store/ -iname *my-test*
[root@test-host:~]#
Help me figure out what's wrong with my scripts. Any links to documentation or examples of the implementation of such a task are welcome.
The shell cannot find your script because it is copied into the wrong directory.
This becomes apparent after building my-package.nix:
$ nix-build my-package.nix
$ ls result/
zh5bxljvpmda4mi4x0fviyavsa3r12cx-my-test
Here you see the basename of a store path inside a store path. This is caused by the line:
cp -r ${script} $out/
Changing it to something like this should fix that problem:
cp -r ${script}/* $out/
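To confirm the fix, a rebuild should now show the script under bin/, since writeShellScriptBin places its output there; a quick hedged check:
$ nix-build my-package.nix
$ ls result/bin/
my-test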

Bitbucket Pipelines: use env vars in a NodeJS script to deploy to S3

Right now I have a Bitbucket pipeline that works well with a single step, like so:
( options: docker: true )
- docker build --rm -f Dockerfile-deploy ./ -t deploy --build-arg AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID --build-arg AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
This sets the keys in the Docker container, which then deploys to an ELB using a bash script and the AWS CLI, so I don't actually try to expose the env vars myself, but eb deploy does and it works.
When trying to run a pipeline with the image node:latest and the steps:
- npm i
- npm run build ( Babel transpile )
- npm run deploy ( node script to send to S3 )
In that final step I need the Node script to have access to the env vars that I've added to the Bitbucket repo's pipeline config; instead I get the literal variable name rather than its value:
// NodeJS config file
module.exports = {
  AWS_S3_BUCKET: process.env.AWS_S3_BUCKET || undefined,
  AWS_ACCESS_KEY: process.env.AWS_ACCESS_KEY || undefined,
  AWS_ACCESS_SECRET: process.env.AWS_ACCESS_SECRET || undefined,
}
-
// NodeJS deploy file... parts
const aws = {
  params: {
    Bucket: config.AWS_S3_BUCKET
  },
  accessKeyId: config.AWS_ACCESS_KEY,
  secretAccessKey: config.AWS_ACCESS_SECRET,
  distributionId: config.CLOUDFRONT_DISTRIBUTION_ID,
  region: "us-east-1"
}

console.log('-----START AWS-----')
console.log(aws)
console.log('------END AWS------')
Then Bitbucket Pipelines echoes this for the console.logs:
-----START AWS-----
{ params: { Bucket: '$AWS_S3_BUCKET' },
  accessKeyId: '$AWS_ACCESS_KEY',
  secretAccessKey: '$AWS_ACCESS_SECRET',
  distributionId: '$CLOUDFRONT_DISTRIBUTION_ID',
  region: 'us-east-1' }
------END AWS------
Any thoughts?
Well, my problem was that I copy-pasted the variables from AWS with a trailing space... which is a character, but certainly not part of the expected secret or key string. Oops.
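A defensive trim in the config file guards against exactly this kind of pasted whitespace; a minimal sketch, assuming the same config module shown in the question:
// NodeJS config file, trimming stray whitespace from each value
const clean = value => (typeof value === 'string' ? value.trim() : undefined)

module.exports = {
  AWS_S3_BUCKET: clean(process.env.AWS_S3_BUCKET),
  AWS_ACCESS_KEY: clean(process.env.AWS_ACCESS_KEY),
  AWS_ACCESS_SECRET: clean(process.env.AWS_ACCESS_SECRET),
}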

Yarn install production dependencies of a single package in workspace

I'm trying to install the production dependencies only for a single package in my workspace. Is that possible?
I've already tried this:
yarn workspace my-package-in-workspace install -- --prod
But it is installing all production dependencies of all my packages.
Yarn 1 doesn't support it, as far as I know.
If you are trying to install a specific package in a Dockerfile, then there is a workaround:
copy the yarn.lock file and the root package.json
copy only the package.json files of the packages you need: your package and any other packages in the monorepo that your package depends on.
in the Dockerfile, manually remove all the devDependencies from all the package.json(s) that you copied.
run yarn install on the root package.json.
Note: copying the lockfile also gives a deterministic installation, which is recommended in monorepos; see https://stackoverflow.com/a/64503207/806963
Full Dockerfile example:
FROM node:12
WORKDIR /usr/project
COPY yarn.lock package.json remove-all-dev-deps-from-all-package-jsons.js change-version.js ./
ARG package_path=packages/dancer-placing-manager
COPY ${package_path}/package.json ./${package_path}/package.json
RUN node remove-all-dev-deps-from-all-package-jsons.js && rm remove-all-dev-deps-from-all-package-jsons.js
RUN yarn install --frozen-lockfile --production
COPY ${package_path}/dist/src ./${package_path}/dist/src
COPY ${package_path}/src ./${package_path}/src
CMD node --unhandled-rejections=strict ./packages/dancer-placing-manager/dist/src/index.js
remove-all-dev-deps-from-all-package-jsons.js:
const fs = require('fs')
const path = require('path')
const { execSync } = require('child_process')

async function deleteDevDeps(packageJsonPath) {
  const packageJson = require(packageJsonPath)
  delete packageJson.devDependencies
  await new Promise((res, rej) =>
    fs.writeFile(packageJsonPath, JSON.stringify(packageJson, null, 2), 'utf-8', error => (error ? rej(error) : res())),
  )
}

function getSubPackagesPaths(repoPath) {
  const result = execSync(`yarn workspaces --json info`).toString()
  const workspacesInfo = JSON.parse(JSON.parse(result).data)
  return Object.values(workspacesInfo)
    .map(workspaceInfo => workspaceInfo.location)
    .map(packagePath => path.join(repoPath, packagePath, 'package.json'))
}

async function main() {
  const repoPath = __dirname
  const packageJsonPath = path.join(repoPath, 'package.json')
  await deleteDevDeps(packageJsonPath)
  await Promise.all(getSubPackagesPaths(repoPath).map(packageJsonPath => deleteDevDeps(packageJsonPath)))
}

if (require.main === module) {
  main()
}
It looks like this is easily possible now with Yarn 2: https://yarnpkg.com/cli/workspaces/focus
But I haven't tried it myself.
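For reference, the focused install with Yarn 2 would look roughly like this; a hedged sketch (untested here, and it assumes the workspace-tools plugin):
yarn plugin import workspace-tools
yarn workspaces focus --production my-package-in-workspace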
Here is my solution for Yarn 1:
# Install dependencies for the whole monorepo because
# 1. The --ignore-workspaces flag is not implemented https://github.com/yarnpkg/yarn/issues/4099
# 2. The --focus flag is broken https://github.com/yarnpkg/yarn/issues/6715
# Avoid the target workspace dependencies to land in the root node_modules.
sed -i 's|"dependencies":|"workspaces": { "nohoist": ["**"] }, "dependencies":|g' apps/target-app/package.json
# Run `yarn install` twice to workaround https://github.com/yarnpkg/yarn/issues/6988
yarn || yarn
# Find all linked node_modules and dereference them so that there are no broken
# symlinks if the target-app is copied somewhere. (Don't use
# `cp -rL apps/target-app some/destination` because then it also dereferences
# node_modules/.bin/* and thus breaks them.)
cd apps/target-app/node_modules
for f in $(find . -maxdepth 1 -type l)
do
  l=$(readlink -f $f) && rm $f && cp -rf $l $f
done
Now apps/target-app can be copied and used as a standalone app.
I would not recommend it for production. It is slow (because it installs dependencies for the whole monorepo) and not really reliable (because there may be additional issues with symlinks).
You may try
yarn workspace @my-monorepo/my-package-in-workspace install -- --prod
